WorldWideScience

Sample records for parallel factor analysis

  1. Parallel factor analysis PARAFAC of process affected water

    Energy Technology Data Exchange (ETDEWEB)

    Ewanchuk, A.M.; Ulrich, A.C.; Sego, D. [Alberta Univ., Edmonton, AB (Canada). Dept. of Civil and Environmental Engineering; Alostaz, M. [Thurber Engineering Ltd., Calgary, AB (Canada)

    2010-07-01

    A parallel factor analysis (PARAFAC) of oil sands process-affected water was presented. Naphthenic acids (NA) are traditionally described as monobasic carboxylic acids, but research has indicated that oil sands NA do not fit the classical definition. Oil sands organic acids have toxic and corrosive properties. When analyzed by fluorescence spectroscopy, oil sands process-affected water displays a characteristic peak at 290 nm excitation and approximately 346 nm emission. In this study, PARAFAC was used to decompose process-affected water multi-way data into components representing analytes, chemical compounds, and groups of compounds. Water samples from various oil sands operations were analyzed to obtain excitation-emission matrices (EEMs), which were then arranged into a large matrix in decreasing order of process-affected water content for PARAFAC. The data were divided into 5 components. A comparison with commercially prepared NA samples suggested that oil sands NA are fundamentally different. Further research is needed to determine what each of the 5 components represents. tabs., figs.

  2. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
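    Both criteria compared in this record can be made concrete with a small sketch. Below is a minimal, generic illustration of Horn's parallel analysis (the PA-PCA variant) in Python/NumPy; the function name, simulation count, and settings are my own choices, not the authors' implementation.

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Horn's parallel analysis (PA-PCA variant): retain components whose
    observed correlation-matrix eigenvalues exceed those of random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # eigvalsh returns ascending order; reverse to descending
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    crit = np.percentile(sim_eigs, percentile, axis=0)  # 95th-percentile rule
    mean_crit = sim_eigs.mean(axis=0)                   # mean-eigenvalue rule
    # retain components up to the first one that fails the criterion
    failed = obs_eigs <= crit
    n_factors = int(np.argmax(failed)) if np.any(failed) else p
    return n_factors, obs_eigs, crit, mean_crit
```

On data generated from two strong underlying factors, this sketch recovers two components; swapping `crit` for `mean_crit` in the comparison gives the mean-eigenvalue criterion the abstract mentions.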

  3. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments

    NARCIS (Netherlands)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode…

  4. Sparse Probabilistic Parallel Factor Analysis for the Modeling of PET and Task-fMRI Data

    DEFF Research Database (Denmark)

    Beliveau, Vincent; Papoutsakis, Georgios; Hinrich, Jesper Løve

    2017-01-01

    Modern datasets are often multiway in nature and can contain patterns common to a mode of the data (e.g. space, time, and subjects). Multiway decompositions such as parallel factor analysis (PARAFAC) take into account the intrinsic structure of the data, and sparse versions of these methods improve…

  5. Spectral analysis of parallel incomplete factorizations with implicit pseudo-overlap

    NARCIS (Netherlands)

    Magolu monga Made, Mardochée; Vorst, H.A. van der

    2000-01-01

    Two general parallel incomplete factorization strategies are investigated. The techniques may be interpreted as generalized domain decomposition methods. In contrast to classical domain decomposition methods, adjacent subdomains exchange data during the construction of the incomplete…

  6. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E

    2018-03-01

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
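    As background for the trilinear model these records describe, X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r] is commonly fitted by alternating least squares. The sketch below is a generic textbook ALS in Python/NumPy under my own naming; it is not the functional extension proposed in this paper.

```python
import numpy as np

def parafac_als(X, rank, n_iter=200, seed=0):
    """Plain PARAFAC/CANDECOMP via alternating least squares.
    X: three-way array (I x J x K). Returns factor matrices A (IxR), B (JxR),
    C (KxR) such that X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, J * K)                     # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    def khatri_rao(U, V):
        # column-wise Kronecker product, rows ordered to match the unfoldings
        return (U[:, None, :] * V[None, :, :]).reshape(U.shape[0] * V.shape[0], -1)
    for _ in range(n_iter):
        # each step is a linear least-squares solve with the other two fixed
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

On a noiseless low-rank tensor the reconstruction error drops to near zero, which is the usual sanity check before applying the model to real EEM or EEG arrays.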

  7. Monitoring organic loading to swimming pools by fluorescence excitation–emission matrix with parallel factor analysis (PARAFAC)

    DEFF Research Database (Denmark)

    Seredynska-Sobecka, Bozena; Stedmon, Colin; Boe-Hansen, Rasmus

    2011-01-01

    Fluorescence Excitation–Emission Matrix spectroscopy combined with parallel factor analysis was employed to monitor water quality and organic contamination in swimming pools. The fluorescence signal of the swimming pool organic matter was low but increased slightly through the day. The analysis revealed that the organic matter fluorescence was characterised by five different components, one of which was unique to swimming pool organic matter and one of which was specific to organic contamination. The latter component had an emission peak at 420 nm and was found to be a sensitive indicator of organic loading in swimming pool water. The fluorescence at 420 nm gradually increased during opening hours and represented material accumulating through the day.

  8. Metabolic profiling based on two-dimensional J-resolved 1H NMR data and parallel factor analysis

    DEFF Research Database (Denmark)

    Yilmaz, Ali; Nyberg, Nils T; Jaroszewski, Jerzy W.

    2011-01-01

    …the intensity variances along the chemical shift axis are taken into account. Here, we describe the use of parallel factor analysis (PARAFAC) as a tool to preprocess a set of two-dimensional J-resolved spectra with the aim of keeping the J-coupling information intact. PARAFAC is a mathematical decomposition … the choice of model was done automatically by evaluating the amount of explained variance and core consistency values. Score plots show the distribution of objects in relation to each other, and loading plots take the form of two-dimensional pseudo-spectra with the same appearance as the original J-resolved spectra…

  9. Parallel Factor Analysis as an exploratory tool for wavelet transformed event-related EEG

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Herrmann, Christoph S.

    2006-01-01

    …by the inter-trial phase coherence (ITPC), encompassing ANOVA analysis of differences between conditions and 5-way analysis of channel × frequency × time × subject × condition. A flow chart is presented on how to perform data exploration using the PARAFAC decomposition on multi-way arrays. This includes (A) channel × frequency × time 3-way arrays of F test values from a repeated measures analysis of variance (ANOVA) between two stimulus conditions; (B) subject-specific 3-way analyses; and (C) an overall 5-way analysis of channel × frequency × time × subject × condition. The PARAFAC decompositions were able to … Analysis of the 3-way array of ANOVA F test values clearly showed the difference of regions of interest across modalities, while the 5-way analysis enabled visualization of both quantitative and qualitative differences. Consequently, PARAFAC is a promising data exploratory tool in the analysis of wavelet-transformed event-related EEG…

  10. Sports Nutrition and Doping Factors in Synchronized Swimming: Parallel Analysis among Athletes and Coaches

    Directory of Open Access Journals (Sweden)

    Gordana Furjan Mandic

    2013-12-01

    Although nutrition and doping are important factors in sports, neither is often investigated in synchronized swimming (Synchro). This study aimed to define and compare Synchro athletes and their coaches on their knowledge of sports nutrition (KSN) and knowledge of doping (KD), and to study factors related to KSN and KD in each of these groups. Additionally, the KSN and KD questionnaires were evaluated for their reliability and validity. Altogether, 82 athletes (17.2 ± 1.92 years of age) and 28 coaches (30.8 ± 5.26 years of age) from Croatia and Serbia were included in the study, with a 99% response rate. The test-retest correlations were 0.94 and 0.90 for the KD and KSN, respectively. Subjects responded equally to 91% of the KD queries and 89% of the KSN queries. Although most of the coaches are highly educated, they declared self-education as their primary source of information about doping and sports nutrition. Coaches scored higher than their athletes on both questionnaires, which demonstrated appropriate discriminative validity of the questionnaires. Variables such as age, sports experience, and formal education are positively correlated with KSN and KD scores among athletes. The athletes who scored better on the KD are less prone to doping behavior in the future. These data reinforce the need for systematic educational programs on doping and sports nutrition in synchronized swimming. Special attention should be placed on younger athletes.

  11. Sports Nutrition and Doping Factors in Synchronized Swimming: Parallel Analysis among Athletes and Coaches.

    Science.gov (United States)

    Furjan Mandic, Gordana; Peric, Mia; Krzelj, Lucijana; Stankovic, Sladana; Zenic, Natasa

    2013-01-01

    Although nutrition and doping are important factors in sports, neither is often investigated in synchronized swimming (Synchro). This study aimed to define and compare Synchro athletes and their coaches on their knowledge of sports nutrition (KSN) and knowledge of doping (KD), and to study factors related to KSN and KD in each of these groups. Additionally, the KSN and KD questionnaires were evaluated for their reliability and validity. Altogether, 82 athletes (17.2 ± 1.92 years of age) and 28 coaches (30.8 ± 5.26 years of age) from Croatia and Serbia were included in the study, with a 99% response rate. The test-retest correlations were 0.94 and 0.90 for the KD and KSN, respectively. Subjects responded equally to 91% of the KD queries and 89% of the KSN queries. Although most of the coaches are highly educated, they declared self-education as their primary source of information about doping and sports nutrition. Coaches scored higher than their athletes on both questionnaires, which demonstrated appropriate discriminative validity of the questionnaires. Variables such as age, sports experience, and formal education are positively correlated with KSN and KD scores among athletes. The athletes who scored better on the KD are less prone to doping behavior in the future. These data reinforce the need for systematic educational programs on doping and sports nutrition in synchronized swimming. Special attention should be placed on younger athletes. Key Points: Although most of the synchro coaches are highly educated, self-education is declared as the primary source of information about doping and sports nutrition. The knowledge of doping and doping-related health hazards is negatively related to potential future doping behavior among synchronized swimmers. The data reinforce the need for systematic educational programs on doping and sports nutrition in synchronized swimming. We advocate improving the knowledge of sports nutrition among older coaches and the knowledge of doping among…

  12. Sports Nutrition and Doping Factors in Synchronized Swimming: Parallel Analysis among Athletes and Coaches

    Science.gov (United States)

    Furjan Mandic, Gordana; Peric, Mia; Krzelj, Lucijana; Stankovic, Sladana; Zenic, Natasa

    2013-01-01

    Although nutrition and doping are important factors in sports, neither is often investigated in synchronized swimming (Synchro). This study aimed to define and compare Synchro athletes and their coaches on their knowledge of sports nutrition (KSN) and knowledge of doping (KD), and to study factors related to KSN and KD in each of these groups. Additionally, the KSN and KD questionnaires were evaluated for their reliability and validity. Altogether, 82 athletes (17.2 ± 1.92 years of age) and 28 coaches (30.8 ± 5.26 years of age) from Croatia and Serbia were included in the study, with a 99% response rate. The test-retest correlations were 0.94 and 0.90 for the KD and KSN, respectively. Subjects responded equally to 91% of the KD queries and 89% of the KSN queries. Although most of the coaches are highly educated, they declared self-education as their primary source of information about doping and sports nutrition. Coaches scored higher than their athletes on both questionnaires, which demonstrated appropriate discriminative validity of the questionnaires. Variables such as age, sports experience, and formal education are positively correlated with KSN and KD scores among athletes. The athletes who scored better on the KD are less prone to doping behavior in the future. These data reinforce the need for systematic educational programs on doping and sports nutrition in synchronized swimming. Special attention should be placed on younger athletes. Key Points: Although most of the synchro coaches are highly educated, self-education is declared as the primary source of information about doping and sports nutrition. The knowledge of doping and doping-related health hazards is negatively related to potential future doping behavior among synchronized swimmers. The data reinforce the need for systematic educational programs on doping and sports nutrition in synchronized swimming. We advocate improving the knowledge of sports nutrition among older coaches and the knowledge of doping among…

  13. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    Science.gov (United States)

    Al-Degs, Yahya; Andri, Bertyl; Thiébaut, Didier; Vial, Jérôme

    2017-01-01

    Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases in a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiment was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array to matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in columns' function, chromatographic trend of drugs, and correlation between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was dominating for condition classification. Also, the number of drugs could be efficiently reduced for columns classification as some of them exhibited a similar behavior, as shown by hierarchical clustering based on PARAFAC components. PMID:28695040

  14. Supercritical Fluid Chromatography of Drugs: Parallel Factor Analysis for Column Testing in a Wide Range of Operational Conditions

    Directory of Open Access Journals (Sweden)

    Ramia Z. Al Bakain

    2017-01-01

    Retention mechanisms involved in supercritical fluid chromatography (SFC) are influenced by interdependent parameters (temperature, pressure, chemistry of the mobile phase, and nature of the stationary phase), a complexity which makes the selection of a proper stationary phase for a given separation a challenging step. For the first time in SFC studies, Parallel Factor Analysis (PARAFAC) was employed to evaluate the chromatographic behavior of eight different stationary phases in a wide range of chromatographic conditions (temperature, pressure, and gradient elution composition). Design of Experiment was used to optimize experiments involving 14 pharmaceutical compounds present in biological and/or environmental samples and with dissimilar physicochemical properties. The results showed the superiority of PARAFAC for the analysis of the three-way (column × drug × condition) data array over unfolding the multiway array to matrices and performing several classical principal component analyses. Thanks to the PARAFAC components, similarity in columns' function, chromatographic trend of drugs, and correlation between separation conditions could be simply depicted: columns were grouped according to their H-bonding forces, while gradient composition was dominating for condition classification. Also, the number of drugs could be efficiently reduced for column classification, as some of them exhibited a similar behavior, as shown by hierarchical clustering based on PARAFAC components.

  15. [Application of excitation-emission matrix spectrum combined with parallel factor analysis in dissolved organic matter in East China Sea].

    Science.gov (United States)

    Lü, Li-Sha; Zhao, Wei-Hong; Miao, Hui

    2013-03-01

    Excitation-emission matrix spectroscopy (EEMs) combined with parallel factor analysis (PARAFAC) was used to examine the fluorescent component features of dissolved organic matter (DOM) sampled from the East China Sea in summer and autumn. The type, distribution, and origin of the fluorescent dissolved organic matter were also discussed. Three fluorescent components were identified by PARAFAC: a protein-like component C1 (235, 280/330), a terrestrial or marine humic-like component C2 (255, 330/400), and a terrestrial humic-like component C3 (275, 360/480). The good linearity between the two humic-like components suggested a common source or some relationship between their chemical constitutions. As a whole, fluorescence intensity in the coastal ocean was higher than in the open ocean at different water depths in both seasons. The relationships of the three components with chlorophyll-a and salinity showed that DOM in the study area is hardly influenced by living algal matter, but that freshwater outflow of the Yangtze River might be its source in the Yangtze River estuary in summer. These results suggest that EEM-PARAFAC modeling can substantially advance research on dissolved organic matter.

  16. [Resolving excitation emission matrix spectroscopy of estuarine CDOM with parallel factor analysis and its application in organic pollution monitoring].

    Science.gov (United States)

    Guo, Wei-Dong; Huang, Jian-Ping; Hong, Hua-Sheng; Xu, Jing; Deng, Xun

    2010-06-01

    The distribution and estuarine behavior of fluorescent components of chromophoric dissolved organic matter (CDOM) from the Jiulong Estuary were determined by fluorescence excitation emission matrix spectroscopy (EEMs) combined with parallel factor analysis (PARAFAC). The feasibility of these components as tracers for organic pollution in estuarine environments was also evaluated. Four separate fluorescent components were identified by PARAFAC, including three humic-like components (C1: 240, 310/382 nm; C2: 230, 250, 340/422 nm; C4: 260, 390/482 nm) and one protein-like component (C3: 225, 275/342 nm). These results indicated that the UV humic-like peak A area designated by the traditional "peak-picking" method was not a single peak but actually a combination of several fluorescent components, and that it also had inherent links to the so-called marine humic-like peak M and terrestrial humic-like peak C. Component C2, which includes peak M, decreased with increasing salinity in the Jiulong Estuary, demonstrating that peak M cannot be regarded as a specific indicator of the "marine" humic-like component. Two humic-like components, C1 and C2, showed additional behavior in the turbidity maximum region (salinity …). … CDOM may provide a fast in-situ way to monitor the variation of the degree of organic pollution in estuarine environments.

  17. The relationship of chromophoric dissolved organic matter parallel factor analysis fluorescence and polycyclic aromatic hydrocarbons in natural surface waters.

    Science.gov (United States)

    Li, Sijia; Chen, Ya'nan; Zhang, Jiquan; Song, Kaishan; Mu, Guangyi; Sun, Caiyun; Ju, Hanyu; Ji, Meichen

    2018-01-01

    Polycyclic aromatic hydrocarbons (PAHs), a large group of persistent organic pollutants (POPs), have caused widespread environmental pollution and ecological effects. Chromophoric dissolved organic matter (CDOM), which consists of complex compounds, is regarded as a proxy of water quality. An attempt was made to understand the relationships of CDOM absorption parameters and parallel factor analysis (PARAFAC) components with PAHs under seasonal variation in the riverine, reservoir, and urban waters of the Yinma River watershed in 2016. These different types of water bodies provided wide CDOM and PAH concentration ranges, with CDOM absorption coefficients at a wavelength of 350 nm (a_CDOM(350)) of 1.17-20.74 m^-1 and total PAHs of 0-1829 ng/L. CDOM excitation-emission matrices (EEMs) presented two fluorescent components, a terrestrial humic-like component (C1) and a tryptophan-like component (C2), identified using PARAFAC. Tryptophan-associated protein-like fluorescence often dominates the EEM signatures of sewage samples. Seasonal CDOM EEM-PARAFAC components and PAH concentrations showed a consistent tendency, indicating that PAHs are pollutants that cannot be ignored. However, the disparities in seasonal CDOM-PAH relationships relate to the similar sources of CDOM and PAHs, and to the proportion of PAHs in CDOM. Although overlooked and poorly appreciated, quantifying the relationship between CDOM and PAHs has important implications, because these results simplify ecological and health-based risk assessment of pollutants compared to traditional chemical measurements.

  18. Characterizing fluorescent dissolved organic matter in a membrane bioreactor via excitation-emission matrix combined with parallel factor analysis.

    Science.gov (United States)

    Maqbool, Tahir; Quang, Viet Ly; Cho, Jinwoo; Hur, Jin

    2016-06-01

    In this study, we successfully tracked the dynamic changes in different constituents of bound extracellular polymeric substances (bEPS), soluble microbial products (SMP), and permeate during the operation of bench-scale membrane bioreactors (MBRs) via fluorescence excitation-emission matrix (EEM) spectroscopy combined with parallel factor analysis (PARAFAC). Three fluorescent groups were identified, including two protein-like components (tryptophan-like C1 and tyrosine-like C2) and one microbial humic-like component (C3). In bEPS, the protein-like components were consistently more dominant than C3 during the MBR operation, while their relative abundance in SMP depended on aeration intensities. C1 of bEPS exhibited a linear correlation (R(2) = 0.738; p < …) with bEPS amounts in sludge, and C2 was closely related to the stability of sludge. The protein-like components were more responsible for membrane fouling. Our study suggests that EEM-PARAFAC can be a promising monitoring tool to provide further insight into process evaluation and membrane fouling during MBR operation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Tracking senescence-induced patterns in leaf litter leachate using parallel factor analysis (PARAFAC) modeling and self-organizing maps

    Science.gov (United States)

    Wheeler, K. I.; Levia, D. F.; Hudson, J. E.

    2017-09-01

    In autumn, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams in forested watersheds changes as trees undergo resorption, senescence, and leaf abscission. Despite its biogeochemical importance, little work has investigated how leaf litter leachate DOM changes throughout autumn and how any changes might differ interspecifically and intraspecifically. Since climate change is expected to cause vegetation migration, it is necessary to learn how changes in forest composition could affect DOM inputs via leaf litter leachate. We examined changes in leaf litter leachate fluorescent DOM (FDOM) from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and from yellow poplar (Liriodendron tulipifera L.) leaves from Maryland. FDOM in leachate samples was characterized by excitation-emission matrices (EEMs). A six-component parallel factor analysis (PARAFAC) model was created to identify components that accounted for the majority of the variation in the data set. Self-organizing maps (SOM) compared the PARAFAC component proportions of leachate samples. Phenophase and species exerted much stronger influence on the determination of a sample's SOM placement than geographic origin. As expected, FDOM from all trees transitioned from more protein-like components to more humic-like components with senescence. Percent greenness of sampled leaves and the proportion of tyrosine-like component 1 were found to be significantly different between the two genetic beech clusters, suggesting differences in photosynthesis and resorption. Our results highlight the need to account for interspecific and intraspecific variations in leaf litter leachate FDOM throughout autumn when examining the influence of allochthonous inputs to streams.

  20. 3-Way characterization of soils by Procrustes rotation, matrix-augmented principal components analysis and parallel factor analysis

    Czech Academy of Sciences Publication Activity Database

    Andrade, J.M.; Kubista, Mikael; Carlosena, A.; Prada, D.

    2007-01-01

    Vol. 603, No. 1 (2007), pp. 20-29, ISSN 0003-2670. Institutional research plan: CEZ:AV0Z50520514. Keywords: PCA; heavy metals; soil. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 3.186, year: 2007

  1. Massive Asynchronous Parallelization of Sparse Matrix Factorizations

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Edmond [Georgia Inst. of Technology, Atlanta, GA (United States)

    2018-01-08

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
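    The bilinear-constraint idea can be sketched concretely for an incomplete LU factorization: each entry of L and U must satisfy (LU)_ij = a_ij on the sparsity pattern, and those equations can be solved by fixed-point sweeps. The Python sketch below is my own illustration with a dense pattern and sequential sweeps for clarity; the method this record describes updates the entries asynchronously in parallel.

```python
import numpy as np

def fixed_point_ilu(A, sweeps=30):
    """Fine-grained iterative ILU: each factor entry satisfies a bilinear
    constraint (L @ U)[i, j] == A[i, j], solved here by fixed-point sweeps.
    With a dense pattern the fixed point is the exact LU factorization; a
    sequential row-major sweep converges quickly, while truly asynchronous
    unordered updates typically need several sweeps."""
    n = A.shape[0]
    L = np.tril(A / np.diag(A), k=-1) + np.eye(n)  # initial guess, unit diagonal
    U = np.triu(A)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                # partial inner product over already-factored entries
                s = L[i, :min(i, j)] @ U[:min(i, j), j]
                if i > j:    # strictly lower entry: solve constraint for L[i, j]
                    L[i, j] = (A[i, j] - s) / U[j, j]
                else:        # upper (incl. diagonal): solve constraint for U[i, j]
                    U[i, j] = A[i, j] - s
    return L, U
```

On a diagonally dominant matrix the sweeps converge to factors with L @ U equal to A (up to rounding), which is the sanity check for the constraint formulation.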

  2. Workspace Analysis for Parallel Robot

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2013-05-01

    As a completely new type of robot, the parallel robot possesses many advantages that the serial robot does not, such as high rigidity, great load-carrying capacity, small error, high precision, low self-weight/load ratio, good dynamic behavior, and easy control, and hence its range of application continues to expand. To find the workspace of a parallel mechanism, a numerical boundary-searching algorithm based on the inverse kinematics solution and the limits on link lengths is introduced. This paper analyses the position workspace and orientation workspace of a parallel robot with six degrees of freedom. The results show that changing the lengths of the branches of the parallel mechanism is the main means of enlarging or reducing its workspace, and that the radius of the moving platform has no effect on the size of the workspace but does change its position.
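    The "inverse kinematics plus link-length limits" test this record describes can be illustrated for a 6-UPS (Stewart-type) platform: a candidate pose lies in the workspace iff every computed leg length stays within its stroke limits. The geometry, limits, and function names below are hypothetical, chosen only to make the idea concrete (orientation is held fixed, so only the position workspace is sampled).

```python
import numpy as np

# Hypothetical geometry: base joints on a circle of radius 2,
# platform joints on a circle of radius 1, matching angular positions.
angles = np.arange(6) * np.pi / 3
base = np.stack([2 * np.cos(angles), 2 * np.sin(angles), np.zeros(6)], axis=1)
plat = np.stack([np.cos(angles), np.sin(angles), np.zeros(6)], axis=1)

def leg_lengths(p):
    """Inverse kinematics of a 6-UPS platform: for a platform translation p,
    each leg length is simply a point-to-point distance."""
    return np.linalg.norm(plat + p - base, axis=1)

def in_workspace(p, lmin=1.5, lmax=3.0):
    """A pose is reachable iff every actuator stays within its stroke limits."""
    legs = leg_lengths(np.asarray(p, dtype=float))
    return bool(np.all((legs >= lmin) & (legs <= lmax)))

def workspace_slice(z, lo=-3.0, hi=3.0, n=61):
    """Numerical boundary search on a grid: sample x-y positions at height z
    and keep the reachable ones; the outline of this set approximates the
    workspace boundary at that height."""
    xs = np.linspace(lo, hi, n)
    return [(x, y) for x in xs for y in xs if in_workspace([x, y, z])]
```

Lengthening the legs (raising `lmax`) visibly grows the reachable set, mirroring the paper's finding that branch length is the main lever on workspace size.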

  3. Characterization of CDOM from urban waters in Northern-Northeastern China using excitation-emission matrix fluorescence and parallel factor analysis.

    Science.gov (United States)

    Zhao, Ying; Song, Kaishan; Li, Sijia; Ma, Jianhang; Wen, Zhidan

    2016-08-01

    Chromophoric dissolved organic matter (CDOM) plays an important role in aquatic systems, but high concentrations of organic materials are considered pollutants. The fluorescent component characteristics of CDOM in urban waters sampled from Northern and Northeastern China were examined by excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC) to investigate the source and compositional changes of CDOM across space and pollution levels. One humic-like component (C1), one tryptophan-like component (C2), and one tyrosine-like component (C3) were identified by PARAFAC. Mean fluorescence intensities of the three CDOM components varied spatially and by pollution level in cities of Northern and Northeastern China during July-August, 2013 and 2014. Principal components analysis (PCA) was conducted to identify the relative distribution of all water samples. Cluster analysis (CA) was also used to categorize the samples into groups of similar pollution levels within a study area. Strong positive linear relationships were revealed between the CDOM absorption coefficient a(254) (R(2) = 0.89, p < …) … CDOM components can be applied to monitor water quality in real time compared with traditional approaches. These results demonstrate that EEM-PARAFAC is useful for evaluating the dynamics of CDOM fluorescent components in urban waters from Northern and Northeastern China, and this method has potential applications for monitoring urban water quality in different regions with various hydrological conditions and pollution levels.

  4. Assessment on the leakage hazard of landfill leachate using three-dimensional excitation-emission fluorescence and parallel factor analysis method.

    Science.gov (United States)

    Pan, Hongwei; Lei, Hongjun; Liu, Xin; Wei, Huaibin; Liu, Shufang

    2017-09-01

    A large number of simple and informal landfills exist in developing countries, posing tremendous soil and groundwater pollution threats. Early warning and monitoring of landfill leachate pollution status is of great importance, yet there is a shortage of affordable and effective tools and methods. In this study, a soil column experiment was performed to simulate the pollution status of leachate using three-dimensional excitation-emission fluorescence (3D-EEMF) and parallel factor analysis (PARAFAC) models. The sum of squared residuals (SSR) and principal component analysis (PCA) were used to determine the optimal components for PARAFAC. A one-way analysis of variance showed that the component scores of the soil column leachate were significantly influenced by landfill leachate (p < …); … landfill to that of natural soil could be used to evaluate the leakage status of landfill leachate. Furthermore, a hazard index (HI) and a hazard evaluation standard were established. A case study of the Kaifeng landfill indicated a low hazard (level 5) by use of the HI. In summary, HI is presented as a tool to evaluate landfill pollution status and to guide municipal solid waste management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Parallel-Sequential Texture Analysis

    NARCIS (Netherlands)

    van den Broek, Egon; Singh, Sameer; Singh, Maneesha; van Rikxoort, Eva M.; Apte, Chid; Perner, Petra

    2005-01-01

    Color-induced texture analysis is explored using two texture analysis techniques, the co-occurrence matrix and the color correlogram, as well as color histograms. Several quantization schemes for six color spaces and the human-based 11-color quantization scheme have been applied. The VisTex texture

  6. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the parallel processing system capabilities. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. The first category, such as codes used for harmonic analysis and mechanistic fuel performance codes, does not require parallelisation of individual modules. The second category, such as conventional FEM codes, requires parallelisation of individual modules; here, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), parallel active column solver, and substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  7. Parallel single-cell analysis microfluidic platform

    NARCIS (Netherlands)

    van den Brink, Floris Teunis Gerardus; Gool, Elmar; Frimat, Jean-Philippe; Bomer, Johan G.; van den Berg, Albert; le Gac, Severine

    2011-01-01

    We report a PDMS microfluidic platform for parallel single-cell analysis (PaSCAl) as a powerful tool to decipher the heterogeneity found in cell populations. Cells are trapped individually in dedicated pockets, and thereafter, a number of invasive or non-invasive analysis schemes are performed.

  8. Seasonal characterization of CDOM for lakes in semiarid regions of Northeast China using excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC)

    Science.gov (United States)

    Zhao, Ying; Song, Kaishan; Wen, Zhidan; Li, Lin; Zang, Shuying; Shao, Tiantian; Li, Sijia; Du, Jia

    2016-03-01

    The seasonal characteristics of fluorescent components in chromophoric dissolved organic matter (CDOM) for lakes in the semiarid region of Northeast China were examined by excitation-emission matrix (EEM) spectra and parallel factor analysis (PARAFAC). Two humic-like (C1 and C2) and two protein-like (C3 and C4) components were identified using PARAFAC. The average fluorescence intensity of the four components differed with seasonal variation from June and August 2013 to February and April 2014. Components 1 and 2 exhibited a strong linear correlation (R2 = 0.628). Significantly positive linear relationships were found between the CDOM absorption coefficient a(254) (R2 = 0.72, 0.46) and DOC. However, almost no correlation was found between salinity and the EEM-PARAFAC-extracted components except for C3 (R2 = 0.469). Results from this investigation demonstrate that the EEM-PARAFAC technique can be used to evaluate the seasonal dynamics of CDOM fluorescent components for inland waters in the semiarid regions of Northeast China, and to quantify CDOM components for other waters with similar environmental conditions.

  9. Detection of Copper (II) and Cadmium (II) binding to dissolved organic matter from macrophyte decomposition by fluorescence excitation-emission matrix spectra combined with parallel factor analysis

    International Nuclear Information System (INIS)

    Yuan, Dong-hai; Guo, Xu-jing; Wen, Li; He, Lian-sheng; Wang, Jing-gang; Li, Jun-qi

    2015-01-01

    Fluorescence excitation-emission matrix (EEM) spectra coupled with parallel factor analysis (PARAFAC) were used to characterize dissolved organic matter (DOM) derived from macrophyte decomposition, and to study its complexation with Cu (II) and Cd (II). Both the protein-like and the humic-like components showed a marked quenching effect with Cu (II). Negligible quenching effects were found for Cd (II) with components 1, 5 and 6. The stability constants and the fraction of binding fluorophores for the humic-like components and Cu (II) can be influenced by macrophyte decomposition across various weight gradients of aquatic plants. Macrophyte decomposition within the scope of the appropriate aquatic phytomass can maximize the stability constant of DOM-metal complexes. A large amount of organic matter is introduced into the aquatic environment by macrophyte decomposition, suggesting that the potential risk of DOM as a carrier of heavy metal contamination in macrophytic lakes should not be ignored. - Highlights: • Macrophyte decomposition increases fluorescent DOM components in the upper sediment. • Protein-like components are quenched or enhanced by adding Cu (II) and Cd (II). • Macrophyte decomposition DOM can impact the affinity of Cu (II) and Cd (II). • The log K M and f values showed a marked change due to macrophyte decomposition. • Macrophyte decomposition can maximize the stability constant of DOM-Cu (II) complexes. - DOM from macrophyte decomposition can influence the binding affinity of metal ions in macrophytic lakes

  10. Heterogeneous adsorption behavior of landfill leachate on granular activated carbon revealed by fluorescence excitation emission matrix (EEM)-parallel factor analysis (PARAFAC).

    Science.gov (United States)

    Lee, Sonmin; Hur, Jin

    2016-04-01

    Heterogeneous adsorption behavior of landfill leachate on granular activated carbon (GAC) was investigated by fluorescence excitation-emission matrix (EEM) spectroscopy combined with parallel factor analysis (PARAFAC). The equilibrium adsorption of two leachates on GAC was well described by simple Langmuir and Freundlich isotherm models. A more nonlinear isotherm and a slower adsorption rate were found for the leachate with the higher specific UV absorbance and humification index, suggesting that a leachate containing more aromatic content and condensed structures may have less accessible sites on the GAC surface and a lower degree of diffusive adsorption. Such differences in adsorption behavior were found even within the bulk leachate, as revealed by the dissimilarity in isotherm and kinetic model parameters between the two identified PARAFAC components. For both leachates, the terrestrial humic-like (C1) component, which is likely associated with relatively large and condensed aromatic structures, exhibited higher isotherm nonlinearity and a slower kinetic rate for GAC adsorption than the microbial humic-like (C2) component. Our results were consistent with size exclusion effects, a well-known GAC adsorption mechanism. This study demonstrates the promising benefit of using EEM-PARAFAC for GAC adsorption processes of landfill leachate through fast monitoring of the influent and treated leachate, which can provide valuable information for optimizing treatment processes and predicting further environmental impacts of the treated effluent. Copyright © 2016 Elsevier Ltd. All rights reserved.
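
Fitting the two isotherm models named above is a short curve-fitting exercise. The sketch below uses scipy on hypothetical equilibrium data (the Ce/qe values and parameter choices are invented for illustration, not taken from the cited study):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce^(1/n); n > 1 means a nonlinear isotherm."""
    return KF * Ce ** (1.0 / n)

# Hypothetical equilibrium data (Ce: residual concentration, qe: adsorbed amount),
# generated from a Langmuir curve with qmax = 50, KL = 0.25 plus small deviations.
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
qe = langmuir(Ce, 50.0, 0.25) + np.array([0.3, -0.2, 0.4, -0.5, 0.2, -0.3, 0.1])

pL, _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.1])
pF, _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])
for name, model, p in [("Langmuir", langmuir, pL), ("Freundlich", freundlich, pF)]:
    resid = qe - model(Ce, *p)
    r2 = 1 - np.sum(resid**2) / np.sum((qe - qe.mean())**2)
    print(f"{name}: params = {np.round(p, 3)}, R2 = {r2:.3f}")
```

Comparing the fitted Freundlich exponent (1/n) between PARAFAC components is one simple way to quantify the "more nonlinear isotherm" observation in the abstract.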

  11. A Comparative Study of the Application of Fluorescence Excitation-Emission Matrices Combined with Parallel Factor Analysis and Nonnegative Matrix Factorization in the Analysis of Zn Complexation by Humic Acids

    Directory of Open Access Journals (Sweden)

    Patrycja Boguta

    2016-10-01

    The main aim of this study was the application of excitation-emission fluorescence matrices (EEMs) combined with two decomposition methods, parallel factor analysis (PARAFAC) and nonnegative matrix factorization (NMF), to study the interaction mechanisms between humic acids (HAs) and Zn(II) over a wide concentration range (0-50 mg·dm−3). The influence of HA properties on Zn(II) complexation was also investigated. Stability constants, quenching degree, and complexation capacity were estimated for binding sites found in raw EEM, EEM-PARAFAC, and EEM-NMF data using mathematical models. Combining EEM fluorescence analysis with one of the proposed decomposition methods enabled separation of overlapping binding sites and yielded more accurate calculations of the binding parameters. PARAFAC and NMF processing revealed binding sites invisible in some raw EEM datasets, as well as entirely new maxima attributed to structures of the lowest humification. Decomposed data showed an increase in Zn complexation with increasing humification, aromaticity, and molecular weight of the HAs. EEM-PARAFAC analysis also revealed that the most stable compounds were formed by structures containing the highest amounts of nitrogen. The content of oxygen functional groups did not influence the binding parameters, mainly due to the stronger competition of the metal cation with protons. EEM spectra coupled with NMF and especially PARAFAC processing gave more adequate assessments of the interactions than raw EEM data, and should be especially recommended for modeling complexation processes where the fluorescence intensity (FI) changes are weak or where the processes are subject to interference from other fluorophores.
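
Stability constants for fluorescence-quenching titrations like these are often estimated with the Ryan-Weber 1:1 complexation model. The sketch below fits that model to a synthetic titration (F0, CL, and the "measured" curve are invented for the example; the cited study's actual model and parameter values may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

F0, CL = 100.0, 5.0  # assumed initial fluorescence and total ligand (binding-site) conc., in µM

def ryan_weber(CM, K, f):
    """Ryan-Weber 1:1 model: fluorescence vs. total metal concentration CM.
    K is the conditional stability constant (per µM here); f = F_ML/F0 is the
    residual fluorescence fraction at full complexation."""
    a = K * CL + K * CM + 1.0
    return F0 + (f * F0 - F0) / (2.0 * K * CL) * (a - np.sqrt(a**2 - 4.0 * K**2 * CL * CM))

# Synthetic titration generated with known K = 2.0 and f = 0.3, lightly perturbed.
CM = np.linspace(0.0, 50.0, 20)
F = ryan_weber(CM, 2.0, 0.3) + 0.2 * np.sin(CM)

(K_fit, f_fit), _ = curve_fit(ryan_weber, CM, F, p0=[1.0, 0.5],
                              bounds=([1e-6, 0.0], [100.0, 1.0]))
print(f"K = {K_fit:.2f} per µM, residual fluorescence fraction f = {f_fit:.2f}")
```

Running the same fit on raw EEM peak intensities versus PARAFAC/NMF component scores is exactly the comparison the abstract describes: overlapping binding sites bias the raw-peak fit, while decomposed scores isolate each fluorophore's quenching curve.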

  12. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data (SPMD) model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution time was reduced by a factor of 1.6 when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data reads, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reducing execution times in temporal fringe pattern analysis
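
The SPMD idea above — every worker runs the same per-pixel phase recovery on its own slice of the fringe stack — can be sketched as follows. This is an illustrative sketch with synthetic fringes and a temporal-FFT phase estimator; it uses threads for simplicity, whereas the paper's setup uses cluster nodes and hyperthreaded CPUs:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def phase_from_series(block, carrier_bin):
    """Wrapped phase per pixel, taken from the temporal FFT bin at the carrier."""
    spec = np.fft.fft(block, axis=-1)
    return np.angle(spec[..., carrier_bin])

# Synthetic temporal fringe stack: 64x64 pixels, 32 time samples,
# carrier at FFT bin 4, spatially varying phase ramp.
ny, nx, nt = 64, 64, 32
t = np.arange(nt)
y, x = np.mgrid[0:ny, 0:nx]
phi_true = 2 * np.pi * x / nx - np.pi
I = 100 + 40 * np.cos(2 * np.pi * 4 * t / nt + phi_true[..., None])

# SPMD-style partition: identical program, disjoint row blocks per worker.
rows = np.array_split(I, 4, axis=0)
with ThreadPoolExecutor(max_workers=4) as ex:
    phi = np.concatenate(list(ex.map(lambda b: phase_from_series(b, 4), rows)), axis=0)

err = np.angle(np.exp(1j * (phi - phi_true)))  # wrapped phase error
print("max abs phase error:", np.abs(err).max())
```

Because the row blocks are independent, the partitioning itself adds no communication beyond the final concatenation — which is why, per the abstract, data transfer and waiting dominate the remaining overhead as processor counts grow.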

  13. Spatiotemporal Distribution, Sources, and Photobleaching Imprint of Dissolved Organic Matter in the Yangtze Estuary and Its Adjacent Sea Using Fluorescence and Parallel Factor Analysis

    Science.gov (United States)

    Li, Penghui; Chen, Ling; Zhang, Wen; Huang, Qinghui

    2015-01-01

    To investigate the seasonal and interannual dynamics of dissolved organic matter (DOM) in the Yangtze Estuary, surface and bottom water samples in the Yangtze Estuary and its adjacent sea were collected and characterized using fluorescence excitation-emission matrices (EEMs) and parallel factor analysis (PARAFAC) in both dry and wet seasons in 2012 and 2013. Two protein-like components and three humic-like components were identified. Three humic-like components decreased linearly with increasing salinity (r>0.90, p<0.001), suggesting their distribution could primarily be controlled by physical mixing. By contrast, two protein-like components fell below the theoretical mixing line, largely due to microbial degradation and removal during mixing. Higher concentrations of humic-like components found in 2012 could be attributed to higher freshwater discharge relative to 2013. There was a lack of systematic patterns for three humic-like components between seasons and years, probably due to variations of other factors such as sources and characteristics. Highest concentrations of fluorescent components, observed in estuarine turbidity maximum (ETM) region, could be attributed to sediment resuspension and subsequent release of DOM, supported by higher concentrations of fluorescent components in bottom water than in surface water at two stations where sediments probably resuspended. Meanwhile, photobleaching could be reflected from the changes in the ratios between fluorescence intensity (Fmax) of humic-like components and chromophoric DOM (CDOM) absorption coefficient (a355) along the salinity gradient. This study demonstrates the abundance and composition of DOM in estuaries are controlled not only by hydrological conditions, but also by its sources, characteristics and related estuarine biogeochemical processes. PMID:26107640

  15. Using parallel factor analysis modeling (PARAFAC) and self-organizing maps to track senescence-induced patterns in leaf litter leachate

    Science.gov (United States)

    Wheeler, K. I.; Levia, D. F., Jr.; Hudson, J. E.

    2017-12-01

    As trees undergo autumnal processes such as resorption, senescence, and leaf abscission, the dissolved organic matter (DOM) contribution of leaf litter leachate to streams changes. However, little research has investigated how the fluorescent DOM (FDOM) changes throughout the autumn and how this differs inter- and intraspecifically. Two major impacts of global climate change on forested ecosystems are altered phenology and restructuring of forest community species and subspecies composition. We examined changes in FDOM in leachate from American beech (Fagus grandifolia Ehrh.) leaves in Maryland, Rhode Island, Vermont, and North Carolina and from yellow poplar (Liriodendron tulipifera L.) leaves in Maryland throughout three phenophases: green, senescing, and freshly abscised. Beech leaves from Maryland and Rhode Island have previously been identified as belonging to one distinct genetic cluster, and beech trees from Vermont and the study site in North Carolina to the other. FDOM in the samples was characterized using excitation-emission matrices (EEMs), and a six-component parallel factor analysis (PARAFAC) model was created to identify the components. Self-organizing maps (SOMs) were used to visualize variation and patterns in the PARAFAC component proportions of the leachate samples. Phenophase and species had the greatest influence on where a sample mapped on the SOM, compared with genetic cluster and geographic origin. Throughout senescence, FDOM from all the trees transitioned from more protein-like components to more humic-like ones. Percent greenness of the sampled leaves and the proportion of the tyrosine-like component 1 were found to differ significantly between the two genetic beech clusters. This suggests possible differences in photosynthesis and resorption between the two genetic clusters of beech. The use of SOMs to visualize differences in patterns of senescence between the different species and genetic
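
The SOM step — mapping each sample's vector of PARAFAC component proportions onto a 2-D grid so that similar samples land near each other — can be sketched with a minimal numpy implementation. The two sample groups and their component profiles below are invented stand-ins for the study's green vs. abscised leachates, not its actual data:

```python
import numpy as np

def train_som(data, grid=(6, 6), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 2-D self-organizing map: Gaussian neighborhood, linearly decaying
    learning rate and radius. Returns the (gy, gx, dim) weight grid."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    w = rng.random((gy, gx, data.shape[1]))
    coords = np.stack(np.mgrid[0:gy, 0:gx], axis=-1).astype(float)
    for it in range(n_iter):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(w - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        frac = it / n_iter
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        h = np.exp(-np.sum((coords - np.array(bmu))**2, axis=-1) / (2 * sigma**2))
        w += lr * h[..., None] * (x - w)
    return w

def bmu_of(w, x):
    """Grid coordinates of the best-matching unit for sample x."""
    return np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=-1)), w.shape[:2])

# Hypothetical 6-component "PARAFAC proportion" vectors for two groups:
# protein-dominated (green leaves) vs. humic-dominated (abscised leaves).
rng = np.random.default_rng(2)
green = np.abs(rng.normal([0.4, 0.3, 0.1, 0.1, 0.05, 0.05], 0.03, (40, 6)))
brown = np.abs(rng.normal([0.05, 0.05, 0.1, 0.1, 0.3, 0.4], 0.03, (40, 6)))
w = train_som(np.vstack([green, brown]), grid=(6, 6))

bg = bmu_of(w, green.mean(axis=0))
bb = bmu_of(w, brown.mean(axis=0))
print("green-leaf BMU:", bg, " abscised-leaf BMU:", bb)
```

If phenophase drives the FDOM composition, as the abstract reports, the two groups' best-matching units occupy separate regions of the map.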

  16. Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin using excitation-emission matrix (EEM) fluorescence and parallel factor analysis (PARAFAC).

    Science.gov (United States)

    Singh, Shatrughan; D'Sa, Eurico J; Swenson, Erick M

    2010-07-15

    Chromophoric dissolved organic matter (CDOM) variability in Barataria Basin, Louisiana, USA, was examined by excitation-emission matrix (EEM) fluorescence combined with parallel factor analysis (PARAFAC). CDOM optical properties of absorption and fluorescence at 355 nm along an axial transect (36 stations) during March, April, and May 2008 showed an increasing trend from the marine end member to the upper basin, with mean CDOM absorption of 11.06 ± 5.01, 10.05 ± 4.23, and 11.67 ± 6.03 m(-1) and fluorescence of 0.80 ± 0.37, 0.78 ± 0.39, and 0.75 ± 0.51 RU, respectively. PARAFAC analysis identified two terrestrial humic-like components (components 1 and 2), one non-humic-like component (component 3), and one soil-derived humic acid-like component (component 4). The spatial variation of the components showed an increasing trend from station 1 (near the mouth of the basin) to station 36 (end member of the bay; upper basin). Deviations from this increasing trend were observed at a bayou channel with very high chlorophyll-a concentrations, especially for component 3 in May 2008, suggesting autochthonous production of CDOM. The variability of the components with salinity indicated conservative mixing along the middle part of the transect. Components 1 and 4 were found to be relatively constant, while components 2 and 3 revealed an inverse relationship over the sampling period. Total organic carbon showed an increasing trend for each of the components. An increase in the humification index and a decrease in the fluorescence index along the transect indicated an increase in terrestrially derived organic matter and reduced microbial activity from the lower to the upper basin. The use of these indices along with PARAFAC results improved dissolved organic matter characterization in the Barataria Basin. Copyright 2010 Elsevier B.V. All rights reserved.
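
The conservative-mixing test used in estuarine studies like this one — plot each component against salinity and check for departures from the straight line joining the two end members — reduces to computing residuals from that line. The sketch below does this on invented transect data (the end-member values and the mid-transect removal term are assumptions for illustration):

```python
import numpy as np

# Hypothetical transect: a humic-like component mixes conservatively between a
# fresh end member (1.0 RU at S=0) and a marine one (0.1 RU at S=35), while a
# protein-like component sits below its mixing line (mid-transect removal).
sal = np.linspace(0, 35, 15)
humic = 1.0 + (0.1 - 1.0) * sal / 35 + 0.01 * np.sin(sal)
protein = (0.6 + (0.2 - 0.6) * sal / 35) * (1 - 0.3 * np.sin(np.pi * sal / 35))

def mixing_residuals(f, s):
    """Residuals from the two-end-member mixing line (linear in salinity),
    anchored at the first and last samples of the transect."""
    line = f[0] + (f[-1] - f[0]) * (s - s[0]) / (s[-1] - s[0])
    return f - line

r_h = mixing_residuals(humic, sal)
r_p = mixing_residuals(protein, sal)
print("humic max |residual|: %.3f" % np.abs(r_h).max())
print("protein mean residual: %.3f (negative => below the mixing line)" % r_p.mean())
```

Near-zero residuals indicate conservative mixing (physical dilution only); systematically negative residuals indicate removal, such as the microbial degradation of protein-like components reported in several of the estuarine abstracts here.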

  17. Parallel processor for fast event analysis

    International Nuclear Information System (INIS)

    Hensley, D.C.

    1983-01-01

    Current maximum data rates from the Spin Spectrometer of approx. 5000 events/s (up to 1.3 MBytes/s) and minimum analysis requiring at least 3000 operations/event require a CPU cycle time near 70 ns. In order to achieve an effective cycle time of 70 ns, a parallel processing device is proposed in which up to 4 independent processors will be implemented in parallel. The individual processors are designed around the Am2910 microsequencer, the Am29116 microprocessor, and the Am29517 multiplier. Satellite histogramming in a mass memory system will be managed by a commercial 16-bit microprocessor system

  18. Parallel Integer Factorization Using Quadratic Forms

    National Research Council Canada - National Science Library

    McMath, Stephen S

    2005-01-01

    Factorization is important for both practical and theoretical reasons. In secure digital communication, security of the commonly used RSA public key cryptosystem depends on the difficulty of factoring large integers...

  19. Characterizing chromophoric dissolved organic matter in Lake Tianmuhu and its catchment basin using excitation-emission matrix fluorescence and parallel factor analysis.

    Science.gov (United States)

    Zhang, Yunlin; Yin, Yan; Feng, Longqing; Zhu, Guangwei; Shi, Zhiqiang; Liu, Xiaohan; Zhang, Yuanzhi

    2011-10-15

    Chromophoric dissolved organic matter (CDOM) is an important optically active substance that transports nutrients, heavy metals, and other pollutants from terrestrial to aquatic systems and is used as a measure of water quality. To investigate how the source and composition of CDOM changes in both space and time, we used chemical, spectroscopic, and fluorescence analyses to characterize CDOM in Lake Tianmuhu (a drinking water source) and its catchment in China. Parallel factor analysis (PARAFAC) identified three individual fluorophore moieties that were attributed to humic-like and protein-like materials in 224 water samples collected between December 2008 and September 2009. The upstream rivers contained significantly higher concentrations of CDOM than did the lake water (a(350) of 4.27±2.51 and 2.32±0.59 m(-1), respectively), indicating that the rivers carried a substantial load of organic matter to the lake. Of the three main rivers that flow into Lake Tianmuhu, the Pingqiao River brought in the most CDOM from the catchment to the lake. CDOM absorption and the microbial and terrestrial humic-like components, but not the protein-like component, were significantly higher in the wet season than in other seasons, indicating that the frequency of rainfall and runoff could significantly impact the quantity and quality of CDOM collected from the catchment. The different relationships between the maximum fluorescence intensities of the three PARAFAC components, CDOM absorption, and chemical oxygen demand (COD) concentration in riverine and lake water indicated the difference in the composition of CDOM between Lake Tianmuhu and the rivers that feed it. This study demonstrates the utility of combining excitation-emission matrix fluorescence and PARAFAC to study CDOM dynamics in inland waters. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Insight into the heterogeneous adsorption of humic acid fluorescent components on multi-walled carbon nanotubes by excitation-emission matrix and parallel factor analysis.

    Science.gov (United States)

    Yang, Chenghu; Liu, Yangzhi; Cen, Qiulin; Zhu, Yaxian; Zhang, Yong

    2018-02-01

    The heterogeneous adsorption behavior of commercial humic acid (HA) on pristine and functionalized multi-walled carbon nanotubes (MWCNTs) was investigated by fluorescence excitation-emission matrix and parallel factor (EEM-PARAFAC) analysis. The kinetics, isotherms, thermodynamics, and mechanisms of adsorption of the HA fluorescent components onto MWCNTs were the focus of the present study. Three humic-like fluorescent components were distinguished: one carboxylic-like fluorophore, C1 (λex/λem = (250, 310) nm/428 nm), and two phenolic-like fluorophores, C2 (λex/λem = (300, 460) nm/552 nm) and C3 (λex/λem = (270, 375) nm/520 nm). The Lagergren pseudo-second-order model can be used to describe the adsorption kinetics of the HA fluorescent components. In addition, both the Freundlich and Langmuir models can suitably describe the adsorption of the HA fluorescent components onto MWCNTs, with significantly high correlation coefficients (R2 > 0.94). A clear difference in adsorption affinity (Kd) and in the degree of nonlinear adsorption was observed among the HA fluorescent components on the MWCNTs. The adsorption mechanism suggested that π-π electron donor-acceptor (EDA) interactions played an important role in the interaction between the HA fluorescent components and the three MWCNTs. Furthermore, the values of the thermodynamic parameters, including the Gibbs free energy change (ΔG°), enthalpy change (ΔH°), and entropy change (ΔS°), showed that the adsorption of the HA fluorescent components on MWCNTs was spontaneous and exothermic. Copyright © 2017 Elsevier Inc. All rights reserved.
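
The Lagergren pseudo-second-order kinetics named above are conventionally fitted in linearized form, t/qt = 1/(k2·qe²) + t/qe, so an ordinary linear fit recovers both parameters. A sketch on invented uptake data (qe, k2, and the sampling times are assumptions for the example):

```python
import numpy as np

# Hypothetical uptake data following pseudo-second-order kinetics:
# qt = qe^2*k2*t / (1 + qe*k2*t), with qe = 25 mg/g and k2 = 0.01 g/(mg*min).
qe_true, k2_true = 25.0, 0.01
t = np.array([5., 10., 20., 40., 60., 90., 120., 180.])
qt = qe_true**2 * k2_true * t / (1 + qe_true * k2_true * t)

# Linearized form: t/qt = 1/(k2*qe^2) + t/qe  ->  slope = 1/qe, intercept = 1/(k2*qe^2)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1 / slope
k2_fit = 1 / (intercept * qe_fit**2)
print(f"qe = {qe_fit:.2f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
```

Fitting this per PARAFAC component (C1, C2, C3) and comparing the k2 values is how the slower-kinetics observation for the condensed humic-like component would be quantified.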

  1. Impact of Optimization and Parallelism on Factorization Speed of SIQS

    Directory of Open Access Journals (Sweden)

    Dominik Breitenbacher

    2016-06-01

    This paper examines optimization possibilities of the Self-Initializing Quadratic Sieve (SIQS), an enhanced version of the Quadratic Sieve factorization method. SIQS is considered the second fastest factorization method overall and the fastest for numbers shorter than 100 decimal digits. Even so, it does not run in polynomial time, so it is desirable to look for ways to speed up the method as much as possible. Two feasible ways of achieving this are code optimization and parallelism, and both are utilized in this paper. The goal of this paper is to show how to take advantage of parallelism in SIQS and to reach a large speed-up through detailed source code analysis and optimization. Our implementation process consists of two phases. In the first phase, the complete serial algorithm is implemented in the simplest way, without regard for execution speed. The solution from the first phase serves as the reference implementation for further experiments. Factorization speed is improved in the second phase of the SIQS implementation, where we use a method of iterative modifications in order to examine the contribution of each proposed step. The final optimized version of the SIQS implementation achieved an over 200x speed-up.
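
The stage of the quadratic sieve that parallelizes naturally is relation collection: disjoint sieving intervals can be searched for smooth values of Q(x) = (x + ⌈√N⌉)² − N completely independently. The sketch below illustrates just that stage with naive trial-division smoothness tests (N, the factor base, and the interval sizes are toy choices; real SIQS implementations sieve with logarithms, switch polynomials, and distribute work across processes or machines rather than threads):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def is_smooth(n, factor_base):
    """True if n factors completely over the given prime factor base."""
    for p in factor_base:
        while n % p == 0:
            n //= p
    return n == 1

def sieve_interval(args):
    """Collect x in [lo, hi) for which Q(x) = (x + ceil(sqrt(N)))^2 - N is smooth."""
    N, lo, hi, fb = args
    r = math.isqrt(N) + 1
    return [x for x in range(lo, hi) if is_smooth((x + r)**2 - N, fb)]

N = 87463  # toy composite: 149 * 587
factor_base = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

# Embarrassingly parallel: four disjoint chunks of the sieving interval.
chunks = [(N, lo, lo + 250, factor_base) for lo in range(0, 1000, 250)]
with ThreadPoolExecutor(max_workers=4) as ex:
    smooth_x = [x for part in ex.map(sieve_interval, chunks) for x in part]
print(f"{len(smooth_x)} smooth relations found in [0, 1000)")
```

Once enough relations are collected (more than the factor-base size), the serial linear-algebra step over GF(2) combines them into a congruence of squares; that step, not sieving, is what limits parallel scaling.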

  2. Parallel Integer Factorization Using Quadratic Forms

    National Research Council Canada - National Science Library

    McMath, Stephen S

    2005-01-01

    .... In 1975, Daniel Shanks used class group infrastructure to modify the Morrison-Brillhart algorithm and develop Square Forms Factorization, but he never published his work on this algorithm or provided...

  3. Parallel interactive data analysis with PROOF

    International Nuclear Information System (INIS)

    Ballintijn, Maarten; Biskup, Marek; Brun, Rene; Canal, Philippe; Feichtinger, Derek; Ganis, Gerardo; Kickinger, Guenter; Peters, Andreas; Rademakers, Fons

    2006-01-01

    The Parallel ROOT Facility, PROOF, enables the analysis of much larger data sets on a shorter time scale. It exploits the inherent parallelism in data of uncorrelated events via a multi-tier architecture that optimizes I/O and CPU utilization in heterogeneous clusters with distributed storage. The system provides transparent and interactive access to gigabytes of data today. Being part of the ROOT framework, PROOF inherits the benefits of a performant object storage system and a wealth of statistical and visualization tools. This paper describes the data analysis model of ROOT and the latest developments on closer integration of PROOF into that model and the ROOT user environment, e.g. support for PROOF-based browsing of trees stored remotely, and the popular TTree::Draw() interface. We also outline ongoing developments aimed at improving the flexibility and user-friendliness of the system

  4. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  5. Seasonal characterization of CDOM for lakes in semi-arid regions of Northeast China using excitation-emission matrices fluorescence and parallel factor analysis (EEM-PARAFAC)

    Science.gov (United States)

    Zhao, Y.; Song, K.; Wen, Z.; Li, L.; Zang, S.; Shao, T.; Li, S.; Du, J.

    2015-04-01

    The seasonal characteristics of fluorescence components in CDOM for lakes in the semi-arid region of Northeast China were examined by excitation-emission matrix fluorescence and parallel factor analysis (EEM-PARAFAC). Two humic-like peaks, C1 (Ex/Em = 230, 300/425 nm) and C2 (Ex/Em = 255, 350/460 nm), and two protein-like peaks, B (Ex/Em = 220, 275/320 nm) and T (Ex/Em = 225, 290/360 nm), were identified using PARAFAC. The average fluorescence intensity of the four components differed with seasonal variation from June and August 2013 to February and April 2014. The total fluorescence intensity varied significantly, from 2.54 ± 0.68 nm-1 in June to a mean of 1.93 ± 0.70 nm-1 in August 2013; it then increased to 2.34 ± 0.92 nm-1 in February and fell to its lowest value, 1.57 ± 0.55 nm-1, in April 2014. In general, the fluorescence intensity was dominated by peak C1, indicating that most of the CDOM in the inland waters investigated in this study originated from phytoplankton degradation. The lowest-intensity peak, C2, represents the small portion of CDOM imported from terrestrial organic matter through rainwash and soil leaching. The two protein-like peaks (B and T), formed in situ through microbial activity, had almost equal intensities; notably, in August 2013 and February 2014 the two protein-like peaks differed markedly from the other seasons, and the highest C1 (1.02 nm-1) occurred in February 2014. Components 1 and 2 exhibited a strong linear correlation (R2 = 0.633). There were significantly positive linear relationships between the CDOM absorption coefficient a(254) (R2 = 0.72, 0.46) and DOC. However, almost no correlation was found between salinity and the EEM-PARAFAC-extracted components except for C3 (R2 = 0.469). Results from this investigation demonstrate that the EEM-PARAFAC technique can be used to evaluate the seasonal dynamics of CDOM fluorescence components for inland waters in semi-arid regions of Northeast China.

  6. [Resolving characteristic of CDOM by excitation-emission matrix spectroscopy combined with parallel factor analysis in the seawater of outer Yangtze Estuary in Autumn in 2010].

    Science.gov (United States)

    Yan, Li-Hong; Chen, Xue-Jun; Su, Rong-Guo; Han, Xiu-Rong; Zhang, Chuan-Song; Shi, Xiao-Yong

    2013-01-01

    The distribution and estuarine behavior of fluorescent components of chromophoric dissolved organic matter in the seawater of outer Yangtze Estuary were determined by fluorescence excitation emission matrix spectra combined with parallel factor analysis. Six individual fluorescent components were identified by PARAFAC models, including three terrestrial humic-like components C1 [330 nm/390(430) nm], C2 (390 nm/480 nm), C3 (360 nm/440 nm), marine biological production component C5 (300 nm/400 nm) and protein-like components C4 (290 nm/350 nm) and C6 (275 nm/300 nm). The results indicated that C1, C2, and C3 showed a conservative mixing behavior in the whole estuarine region, especially in high-salinity region. And the fluorescence intensity proportion of C1 and C3 decreased with increase of salinity and fluorescence intensity proportion of C2 kept constant with increase of salinity in the whole estuarine region. While C4 showed conservative mixing behavior in low-salinity region and non-conservative mixing behavior in high-salinity region, and fluorescence intensity proportion of C4 increased with increase of salinity. However, C5 and C6 showed a non-conservative mixing behavior and fluorescence intensity proportion increased with increase of salinity in high-salinity region. Significantly spatial difference was recorded for CDOM absorption coefficient in the coastal region and in the open water areas with the highest value in coastal region and the lowest value in the open water areas. The scope of absorption coefficient and absorption slope was higher in coastal region than that in the open water areas. 
Significant positive correlations were found between the CDOM absorption coefficient and the fluorescence intensities of C1, C2, C3, and C4, but not with C5 or C6, suggesting that river inputs dominated CDOM in the coastal areas, while CDOM in the open water areas was affected by both terrestrial inputs and phytoplankton degradation.

  7. [Characterizing chromophoric dissolved organic matter (CDOM) in Lake Honghu, Lake Donghu and Lake Liangzihu using excitation-emission matrices (EEMs) fluorescence and parallel factor analysis (PARAFAC)].

    Science.gov (United States)

    Zhou, Yong-Qiang; Zhang, Yun-Lin; Niu, Cheng; Wang, Ming-Zhu

    2013-12-01

    Little is known about DOM characteristics in medium to large lakes located in the middle and lower reaches of the Yangtze River, such as Lake Honghu, Lake Donghu and Lake Liangzihu. The absorption, fluorescence and composition characteristics of chromophoric dissolved organic matter (CDOM) are presented using absorption spectroscopy, excitation-emission matrices (EEMs) fluorescence and the parallel factor analysis (PARAFAC) model, based on data collected in Sep.-Oct. 2007 comprising 15, 9 and 10 samplings in Lake Honghu, Lake Donghu and Lake Liangzihu, respectively. The CDOM absorption coefficient at 350 nm, a(350), in Lake Honghu was significantly higher than those in Lake Donghu and Lake Liangzihu (t-test), and a significant relationship was found between the CDOM spectral slope in the wavelength range of 280-500 nm (S280-500) and a(350) (R2 = 0.781, p < 0.001). The mean value of S280-500 in Lake Honghu was significantly lower than that in Lake Donghu (t-test)

  8. Parallel workflow for high-throughput (>1,000 samples/day quantitative analysis of human insulin-like growth factor 1 using mass spectrometric immunoassay.

    Directory of Open Access Journals (Sweden)

    Paul E Oran

    Full Text Available Insulin-like growth factor 1 (IGF1) is an important biomarker for the management of growth hormone disorders. Recently there has been rising interest in deploying mass spectrometric (MS) methods of detection for measuring IGF1. However, widespread clinical adoption of any MS-based IGF1 assay will require increased throughput and speed to justify the costs of analyses, and robust industrial platforms that are reproducible across laboratories. Presented here is an MS-based quantitative IGF1 assay with a throughput rating of >1,000 samples/day and the capability of quantifying IGF1 point mutations and posttranslational modifications. The throughput of the IGF1 mass spectrometric immunoassay (MSIA) benefited from a simplified sample preparation step, IGF1 immunocapture in a tip format, and high-throughput MALDI-TOF MS analysis. The Limit of Detection and Limit of Quantification of the resulting assay were 1.5 μg/L and 5 μg/L, respectively, with intra- and inter-assay precision CVs of less than 10%, and good linearity and recovery characteristics. The IGF1 MSIA was benchmarked against a commercially available IGF1 ELISA via a Bland-Altman method comparison, which revealed a slight positive bias of 16%. The IGF1 MSIA was employed in an optimized parallel workflow utilizing two pipetting robots and MALDI-TOF MS instruments synced into one-hour phases of sample preparation, extraction and MSIA pipette tip elution, MS data collection, and data processing. Using this workflow, high-throughput IGF1 quantification of 1,054 human samples was achieved in approximately 9 hours. This rate of assaying is a significant improvement over existing MS-based IGF1 assays and is on par with that of enzyme-based immunoassays. Furthermore, a mutation was detected in ∼1% of the samples (SNP rs17884626, creating an A→T substitution at position 67 of IGF1), demonstrating the capability of the IGF1 MSIA to detect point mutations and posttranslational modifications.
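The Bland-Altman comparison mentioned above summarizes agreement between two assays by the mean of the paired differences (the bias) and its 1.96-SD limits of agreement. A generic sketch with synthetic numbers, not the study's data:

```python
import numpy as np

def bland_altman(assay_a, assay_b):
    """Return (bias, lower limit, upper limit) for paired measurements
    from two assays, following the usual Bland-Altman summary."""
    d = np.asarray(assay_a, float) - np.asarray(assay_b, float)
    bias = d.mean()                    # systematic difference between assays
    spread = 1.96 * d.std(ddof=1)      # half-width of the limits of agreement
    return bias, bias - spread, bias + spread
```

A positive bias, as in the abstract, means the first assay reads systematically higher than the second.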

  9. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can effectively reduce the running time of the code MCNP. With the MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster running the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with respect to these factors and gives measures to improve it. (authors)

  10. Vacuum Large Current Parallel Transfer Numerical Analysis

    Directory of Open Access Journals (Sweden)

    Enyuan Dong

    2014-01-01

    Full Text Available Stable operation and reliable breaking of large generator currents are a difficult problem in power systems. It can be solved by parallel interrupters and a proper timing sequence with phase-control technology, in which the breaker control strategy is decided by the times of both the first-opening phase and the second-opening phase. A precise transfer-current model can provide the proper timing sequence for breaking the generator circuit breaker. By analyzing transfer-current experiments and data, the real vacuum arc resistance and a precise corrected model of the large transfer-current process are obtained in this paper. The transfer time calculated by the corrected model is very close to the actual transfer time. The model can guide the planning of a proper timing sequence for breaking the vacuum generator circuit breaker with parallel interrupters.

  11. Exploiting fine-grain parallelism in recursive LU factorization

    KAUST Repository

    Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.

    2012-01-01

    While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization, due to its memory-bound characteristic and the atomicity of selecting the appropriate pivots. We remedy this in our new approach to LU factorization of (narrow and tall) panel submatrices, using a parallel fine-grained recursive formulation.

  12. [Characterization of Chromophoric dissolved organic matter (CDOM) in Zhoushan fishery using excitation-emission matrix spectroscopy (EEMs) and parallel factor analysis (PARAFAC)].

    Science.gov (United States)

    Zhou, Qian-qian; Su, Rong-guo; Bai, Ying; Zhang, Chuan-song; Shi, Xiao-yong

    2015-01-01

    The composition, distribution characteristics and sources of chromophoric dissolved organic matter (CDOM) in the Zhoushan fishery in spring were evaluated by fluorescence excitation-emission matrix spectroscopy combined with parallel factor analysis (EEMs-PARAFAC). Three humic-like components [C1 (330/420 nm), C2 ((290)365/440 nm) and C3 ((260)370/490 nm)] and two protein-like components [C4 (285/340 nm) and C5 (270/310 nm)] were identified by EEMs-PARAFAC. The horizontal distribution patterns of the five components were almost the same, with only slight differences, showing decreasing trends with increasing distance from shore. In the surface and middle layers, the high-value areas were located in the north of the Hangzhou Bay estuary and at the outlet of the Xiazhimen channel; the former was higher in the surface layer while the latter was higher in the middle layer. In the bottom layer, CDOM decreased gradually from inshore to offshore, with higher CDOM near Zhoushan Island. The distributions of the fluorescence components showed a trend opposite to salinity, and no significant linear relationship with Chl-a concentration was found, which indicated that CDOM in the surface and middle layers was dominated by terrestrial input and the human activities of Zhoushan Island, and that in the bottom layer was attributable to the human activities of Zhoushan Island. The vertical distribution of the five fluorescent components along the 30.5 degrees N transect showed a decreasing trend from the surface and middle layers to the bottom layer, with high values in inshore and offshore areas, which were correlated with lower salinity and higher Chl-a concentration, respectively. On this transect, CDOM was mainly affected by Yangtze River input in the coastal area but by bioactivity in offshore waters. Along the 30 degrees N transect, the vertical distribution patterns of CDOM were similar to those of the 30.5 degrees N transect, but there was a high-value area in the bottom layer near the shore, attributed to

  13. Exploiting fine-grain parallelism in recursive LU factorization

    KAUST Repository

    Dongarra, Jack

    2012-01-01

    The LU factorization is an important numerical algorithm for solving systems of linear equations. This paper proposes a novel approach for computing the LU factorization in parallel on multicore architectures. It improves overall performance while achieving the numerical quality of the standard LU factorization with partial pivoting. While the update of the trailing submatrix is computationally intensive and highly parallel, the inherently problematic portion of the LU factorization is the panel factorization, due to its memory-bound characteristic and the atomicity of selecting the appropriate pivots. We remedy this in our new approach to LU factorization of (narrow and tall) panel submatrices: a parallel fine-grained recursive formulation of the factorization based on conflict-free partitioning of the data and lock-less synchronization mechanisms. Our implementation lets the overall computation flow naturally with limited contention. Our recursive panel factorization provides the necessary performance increase for the inherently problematic portion of the LU factorization of square matrices. As our experiments revealed, a large panel width results in a larger Amdahl's fraction, which is consistent with related efforts. The performance results of our implementation reveal superlinear speedup and far exceed what can be achieved with equivalent MKL and/or LAPACK routines. © 2012 The authors and IOS Press. All rights reserved.
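The recursive panel factorization described above can be sketched serially in a few lines. The NumPy version below is my own illustrative code, not the authors' fine-grained parallel implementation: it factors a tall panel by recursing on column halves with partial pivoting, which exposes the same recursive structure their threads exploit.

```python
import numpy as np

def rlu(A):
    """Recursive LU with partial pivoting of a tall panel A (m >= n), in place.
    On return A holds the unit-lower L and upper U factors, and the returned
    perm satisfies A_original[perm] == L @ U."""
    m, n = A.shape
    if n == 1:                                   # base case: one column
        p = int(np.argmax(np.abs(A[:, 0])))      # pick the largest pivot
        perm = np.arange(m)
        perm[[0, p]] = perm[[p, 0]]
        A[[0, p], :] = A[[p, 0], :]
        A[1:, 0] /= A[0, 0]                      # store multipliers
        return perm
    n1 = n // 2
    p1 = rlu(A[:, :n1])                          # factor the left half of the panel
    A[:, n1:] = A[:, n1:][p1]                    # apply its pivots to the right half
    L11 = np.tril(A[:n1, :n1], -1) + np.eye(n1)
    A[:n1, n1:] = np.linalg.solve(L11, A[:n1, n1:])   # triangular solve for U12
    A[n1:, n1:] -= A[n1:, :n1] @ A[:n1, n1:]          # trailing (Schur) update
    p2 = rlu(A[n1:, n1:])                        # factor the remaining block
    A[n1:, :n1] = A[n1:, :n1][p2]                # apply its pivots to the left part
    perm = p1.copy()
    perm[n1:] = p1[n1:][p2]                      # compose the two permutations
    return perm
```

In the paper's setting the two halves and the trailing update are processed by cooperating threads with lock-less synchronization; the recursion here only shows the numerical structure.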

  14. Linear stability analysis of heated parallel channels

    International Nuclear Information System (INIS)

    Nourbakhsh, H.P.; Isbin, H.S.

    1982-01-01

    An analysis is presented of the thermal-hydraulic stability of flow in parallel channels, covering the range from inlet subcooling to exit superheat. The model is based on a one-dimensional drift-velocity formulation of the two-phase flow conservation equations. The system of equations is linearized by assuming small disturbances about the steady state. The dynamic response of the system to an inlet flow perturbation is derived, yielding the characteristic equation that predicts the onset of instabilities. A specific application is carried out for homogeneous and regionally uniformly heated systems. The particular case of equal characteristic frequencies of the two-phase and single-phase vapor regions is studied in detail. The D-partition method and the Mikhailov stability criterion are used for determining the marginal stability boundary. Stability predictions from the present analysis are compared with experimental data from the solar test facility. 8 references

  15. DEA Sensitivity Analysis for Parallel Production Systems

    Directory of Open Access Journals (Sweden)

    J. Gerami

    2011-06-01

    Full Text Available In this paper, we introduce systems consisting of several production units, each of which includes several subunits working in parallel, with each subunit working independently. The input and output of each production unit are the sums of the inputs and outputs of its subunits, respectively. We consider each of these subunits as an independent decision making unit (DMU) and create the production possibility set (PPS) produced by these DMUs, in which the frontier points are considered efficient DMUs. We then introduce models for obtaining the efficiency of the production subunits. Using super-efficiency models, we categorize all efficient subunits into different efficiency classes. We then present the sensitivity analysis and stability problem for efficient subunits, including extreme efficient and non-extreme efficient subunits, assuming simultaneous perturbations in all inputs and outputs of the subunits such that the efficiency of the subunit under evaluation declines while the efficiencies of the other subunits improve.

  16. Optimization of headspace experimental factors to determine chlorophenols in water by means of headspace solid-phase microextraction and gas chromatography coupled with mass spectrometry and parallel factor analysis.

    Science.gov (United States)

    Morales, Rocío; Cruz Ortiz, M; Sarabia, Luis A

    2012-11-19

    In this work an analytical procedure based on headspace solid-phase microextraction and gas chromatography coupled with mass spectrometry (HS-SPME-GC/MS) is proposed to determine chlorophenols, with a prior derivatization step to improve analyte volatility and therefore the decision limit (CCα). After optimization, the analytical procedure was applied to analyze river water samples. The following analytes are studied: 2,4-dichlorophenol (2,4-DCP), 2,4,6-trichlorophenol (2,4,6-TrCP), 2,3,4,6-tetrachlorophenol (2,3,4,6-TeCP) and pentachlorophenol (PCP). A D-optimal design is used to study the parameters affecting the HS-SPME process and the derivatization step. Four experimental factors at two levels and one factor at three levels were considered: (i) equilibrium/extraction temperature, (ii) extraction time, (iii) sample volume, (iv) agitation time and (v) equilibrium time; in addition, two interactions between four of them were considered. The D-optimal design reduces the number of experiments from 48 to 18 while maintaining enough precision in the estimation of the effects. As every analysis took 1 h, the design was blocked over 2 days. The second-order property of the PARAFAC (parallel factor analysis) decomposition avoids the need to fit a new calibration model each time the experimental conditions change. Consequently, the standardized loadings in the sample mode estimated by a PARAFAC decomposition are used as the response in the design because they are proportional to the amount of analyte extracted. It was found that the block effect is significant and that a 60 °C equilibrium temperature together with a 25 min extraction time is necessary to achieve the best extraction of the chlorophenols analyzed; the other factors and interactions were not significant. After that, a calibration based on a PARAFAC2 decomposition provided the following values of CCα: 120, 208, 86 and 39 ng L(-1) for 2,4-DCP, 2,4,6-TrCP, 2,3,4,6-TeCP and PCP, respectively, for a

  17. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  18. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    Science.gov (United States)

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
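The orthogonal projection idea used for background drift correction can be illustrated in a few lines: estimate a basis for the background (drift) spectra, then project every measured spectrum onto the orthogonal complement of that space before fitting PARAFAC. This is a generic sketch of the projection step under assumed data shapes, not the paper's exact algorithm:

```python
import numpy as np

def remove_background(X, B):
    """Project each spectrum (row of X, time x wavelength) onto the orthogonal
    complement of the space spanned by background spectra (rows of B)."""
    Q, _ = np.linalg.qr(B.T)          # orthonormal basis of the background space
    return X - (X @ Q) @ Q.T          # subtract the component lying in that space
```

After this projection the corrected data contain no component along the estimated drift spectra, so a PARAFAC model no longer needs extra factors to absorb the drift.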

  19. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone MATRA calculation consumes considerable computing time for thermal margin calculations, and even more time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computability of the MATRA code is proposed and verified in this study. The parallel algorithm was embodied in the MATRA code using the MPI communication method, with minimal modification of the previous code structure. The improvement was confirmed by comparing the results of the single- and multiple-processor algorithms, and the speedup and efficiency were evaluated as the number of processors increased. The performance of the parallel algorithm was verified against the single-processor MATRA for the 1/8-core and whole-core problems, showing that the performance of the MATRA code was greatly improved by the parallel algorithm
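The speedup and efficiency referred to above are the standard strong-scaling metrics. A tiny illustration with made-up timings, not MATRA's measured values:

```python
def speedup_efficiency(t_serial, t_parallel, n_procs):
    """Strong-scaling metrics for a parallelized code:
    speedup S = T1 / Tp and parallel efficiency E = S / p."""
    s = t_serial / t_parallel
    return s, s / n_procs

# Hypothetical example: a 100 s serial run finishing in 12.5 s on 8 processors
# gives ideal (linear) scaling.
s, e = speedup_efficiency(100.0, 12.5, 8)   # s = 8.0, e = 1.0
```

Efficiency below 1 indicates communication or load-imbalance overhead; values are usually reported over an increasing processor count, as in the abstract.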

  20. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.
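The coarse-grid correction at the heart of both the traditional and parallel variants can be sketched on a 1D Poisson model problem. This is a textbook two-grid cycle (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation, exact coarse solve), not the Frederickson-McBryan multiple-coarse-grid scheme itself:

```python
import numpy as np

def two_grid(f, u, n_smooth=3, omega=2.0/3.0):
    """One two-grid cycle for -u'' = f on (0,1) with zero Dirichlet boundaries;
    u holds the n interior unknowns (n odd so the grid coarsens evenly)."""
    n = u.size
    h = 1.0 / (n + 1)
    def apply_A(v):                              # tridiagonal (2,-1,-1)/h^2 stencil
        Av = 2.0 * v.copy()
        Av[:-1] -= v[1:]
        Av[1:] -= v[:-1]
        return Av / h**2
    def smooth(v, k):                            # weighted Jacobi sweeps
        for _ in range(k):
            v = v + omega * (h**2 / 2.0) * (f - apply_A(v))
        return v
    u = smooth(u, n_smooth)                      # pre-smoothing
    r = f - apply_A(u)                           # fine-grid residual
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full-weighting restriction
    nc = rc.size
    hc = 1.0 / (nc + 1)
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(Ac, rc)                 # exact coarse-grid solve
    e = np.zeros(n)                              # linear interpolation back to fine grid
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return smooth(u + e, n_smooth)               # correction + post-smoothing
```

The smoother damps high-frequency error while the coarse-grid solve removes the smooth error; the parallel variant above modifies exactly this correction operator to remove aliasing between the two frequency ranges.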

  1. An interactive parallel processor for data analysis

    International Nuclear Information System (INIS)

    Mong, J.; Logan, D.; Maples, C.; Rathbun, W.; Weaver, D.

    1984-01-01

    A parallel array of eight minicomputers has been assembled in an attempt to deal with kiloparameter data events. By exporting computer system functions to a separate processor, the authors have been able to achieve computer amplification linearly proportional to the number of executing processors

  2. Discrete Hadamard transformation algorithm's parallelism analysis and achievement

    Science.gov (United States)

    Hu, Hui

    2009-07-01

    The Discrete Hadamard Transform (DHT) is widely applied in real-time signal processing, but its speed is limited by the operation speed of a single DSP. This article investigates parallelizing the DHT and analyzes its parallel performance. Based on the programming structure of the multiprocessor platform TMS320C80, two kinds of parallel DHT algorithms were implemented. Several experiments demonstrated the effectiveness of the proposed algorithms.

  3. Parallelization of the Physical-Space Statistical Analysis System (PSAS)

    Science.gov (United States)

    Larson, J. W.; Guo, J.; Lyster, P. M.

    1999-01-01

    Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. The problem of computational

  4. Development and validation of a method for the determination of regulated fragrance allergens by High-Performance Liquid Chromatography and Parallel Factor Analysis 2.

    Science.gov (United States)

    Pérez-Outeiral, Jessica; Elcoroaristizabal, Saioa; Amigo, Jose Manuel; Vidal, Maider

    2017-12-01

    This work presents the development and validation of a multivariate method for the quantitation of 6 potentially allergenic substances (PAS) related to fragrances by ultrasound-assisted emulsification microextraction coupled with HPLC-DAD and PARAFAC2, in the presence of 18 other PAS. The objective is the extension of a previously proposed univariate method so as to determine the 24 PAS currently considered allergens. The suitability of the multivariate approach for the qualitative and quantitative analysis of the analytes is discussed through datasets of increasing complexity, comprising the assessment and validation of the method's performance. PARAFAC2 adequately modeled the data in the face of different instrumental and chemical issues, such as co-eluting profiles, overlapping spectra, unknown interfering compounds, retention time shifts and baseline drifts. Satisfactory quality parameters of the model performance were obtained (R2 ≥ 0.94), as well as meaningful chromatographic and spectral profiles (r ≥ 0.97). Moreover, low prediction errors for external validation standards (below 15% in most cases) as well as acceptable quantification errors in real spiked samples (recoveries from 82 to 119%) confirmed the suitability of PARAFAC2 for resolution and quantification of the PAS. The combination of the previously proposed univariate approach, for the well-resolved peaks, with the developed multivariate method allows the determination of the 24 regulated PAS. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Parallelization for X-ray crystal structural analysis program

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, Hiroshi [Japan Atomic Energy Research Inst., Tokyo (Japan); Minami, Masayuki; Yamamoto, Akiji

    1997-10-01

    In this report we study vectorization and parallelization of an X-ray crystal structural analysis program. The target machine is the NEC SX-4, a distributed/shared-memory vector-parallel supercomputer. X-ray crystal structural analysis is surveyed, and a new multi-dimensional discrete Fourier transform method is proposed. The new method is designed to have a very long vector length, which yields 12.0 times higher performance than the original code. Beyond this vectorization, parallelization by micro-task functions on the SX-4 reaches a 13.7-fold acceleration of the multi-dimensional discrete Fourier transform part with 14 CPUs, and a 3.0-fold acceleration of the whole program. In total, a 35.9-fold acceleration over the original single-CPU scalar version is achieved with vectorization and parallelization on the SX-4. (author)

  6. Event analysis using a massively parallel processor

    International Nuclear Information System (INIS)

    Bale, A.; Gerelle, E.; Messersmith, J.; Warren, R.; Hoek, J.

    1990-01-01

    This paper describes a system for performing histogramming of n-tuple data at interactive rates using a commercial SIMD processor array connected to a work-station running the well-known Physics Analysis Workstation software (PAW). Results indicate that an order of magnitude performance improvement over current RISC technology is easily achievable

  7. A supercomputer for parallel data analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    The project of a powerful multiprocessor system is proposed. The main purpose of the project is to develop a low-cost computer system with a processing rate of a few tens of millions of operations per second. The system solves many data analysis problems from high-energy physics spectrometers. It includes about 70 powerful MOTOROLA-68020-based slave microprocessor boards linked through VME crates to a host VAX microcomputer. Each microprocessor board runs the same algorithm requiring large computing time. The host computer distributes data over the microprocessor boards and collects and combines the obtained results. The architecture of the system easily allows it to be used in real-time mode

  8. Design and Transmission Analysis of an Asymmetrical Spherical Parallel Manipulator

    DEFF Research Database (Denmark)

    Wu, Guanglei; Caro, Stéphane; Wang, Jiawei

    2015-01-01

    This paper presents an asymmetrical spherical parallel manipulator and its transmissibility analysis. This manipulator contains a center shaft to both generate a decoupled unlimited-torsion motion and support the mobile platform for high positioning accuracy. This work addresses the transmission analysis and optimal design of the proposed manipulator based on its kinematic analysis. The input and output transmission indices of the manipulator are defined for its optimum design, based on the virtual coefficient between the transmission wrenches and twist screws. The sets of optimal parameters are identified and the distribution of the transmission index is visualized. Moreover, a comparative study of the performances with respect to symmetrical spherical parallel manipulators is conducted, and the comparison shows the advantages of the proposed manipulator over its spherical parallel counterparts.

  9. A parallel solution for high resolution histological image analysis.

    Science.gov (United States)

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high-resolution images (on the order of several gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitalization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge, and the image processing applied to these slides is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, covering low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested using the following parallel computing architectures: distributed memory with massively parallel processors and two networks, INFINIBAND and Myrinet, composed of 17 and 1024 nodes, respectively. The proposed parallel framework is a flexible, high-performance solution, and it shows that the efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    Science.gov (United States)

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
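A minimal version of the embarrassingly parallel pattern the tutorial describes, shown in Python rather than MATLAB or R; the toy "simulation" below is a made-up stand-in for a costly risk model:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def one_replication(seed):
    """One independent simulation replication: here just the mean of
    10,000 uniform draws, standing in for an expensive risk model run."""
    rng = random.Random(seed)     # a per-replication RNG keeps runs reproducible
    return sum(rng.random() for _ in range(10_000)) / 10_000

def run_parallel(seeds, workers=4):
    # Replications share nothing, so they can be mapped onto worker
    # processes with no inter-process communication ("embarrassingly parallel").
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(one_replication, seeds))
```

Because each replication gets its own seed, `run_parallel(range(8))` returns exactly the results a serial loop would, just sooner on multicore hardware, which is the property that makes parallelization attractive for uncertainty quantification.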

  11. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
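The trade-off discussed above (a high fixed cost per message versus support for long messages) is commonly captured by the latency-bandwidth model T(n) = α + βn. A tiny illustration with made-up parameter values, not the paper's machine measurements:

```python
def transfer_time(n_words, alpha, beta):
    """Latency-bandwidth ("alpha-beta") model: fixed start-up cost alpha
    plus per-word cost beta for an n-word message."""
    return alpha + beta * n_words

# With a high start-up cost, one 1000-word message is far cheaper than
# 1000 single-word messages (illustrative values: alpha = 50, beta = 0.5).
bulk = transfer_time(1000, 50.0, 0.5)        # one long message: 550.0
many = 1000 * transfer_time(1, 50.0, 0.5)    # many short messages: 50500.0
```

This is why the analysis concludes that efficient transmission of long messages (amortizing α over many words) matters so much on medium-grain machines.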

  12. Analysis and implementation of LLC-T series parallel resonant ...

    African Journals Online (AJOL)

    A prototype 300 W, 100 kHz converter is designed and built to experimentally demonstrate, dynamic and steady state performance for the LLC-T series parallel resonant converter. A comparative study is performed between experimental results and the simulation studies. The analysis shows that the output of converter is ...

  13. Kinematic Analysis and Performance Evaluation of Novel PRS Parallel Mechanism

    Science.gov (United States)

    Balaji, K.; Khan, B. Shahul Hamid

    2018-02-01

    In this paper, a 3 DoF (Degree of Freedom) novel PRS (Prismatic-Revolute-Spherical) type parallel mechanism has been designed and presented. The combination of straight and arc type linkages for a 3 DoF parallel mechanism is introduced for the first time. The performance of the mechanisms is evaluated based on indices such as Minimum Singular Value (MSV), Condition Number (CN), Local Conditioning Index (LCI), Kinematic Configuration Index (KCI) and Global Conditioning Index (GCI). The overall reachable workspace of all mechanisms is presented. The kinematic measure, dexterity measure and workspace analysis for all the mechanisms have been evaluated and compared.

  14. State-plane analysis of parallel resonant converter

    Science.gov (United States)

    Oruganti, R.; Lee, F. C.

    1985-01-01

    A method for analyzing the complex operation of a parallel resonant converter is developed, utilizing graphical state-plane techniques. The comprehensive mode analysis uncovers, for the first time, the presence of other complex modes besides the continuous conduction mode and the discontinuous conduction mode and determines their theoretical boundaries. Based on the insight gained from the analysis, a novel, high-frequency resonant buck converter is proposed. The voltage conversion ratio of the new converter is almost independent of load.

  15. Kinematic analysis of parallel manipulators by algebraic screw theory

    CERN Document Server

    Gallardo-Alvarado, Jaime

    2016-01-01

    This book reviews the fundamentals of screw theory concerned with velocity analysis of rigid bodies, confirmed with detailed and explicit proofs. The author additionally investigates acceleration, jerk, and hyper-jerk analyses of rigid bodies following the trend of the velocity analysis. With the material provided in this book, readers can extend the theory of screws to rigid-body kinematics of arbitrary order. Illustrative examples and exercises to reinforce learning are provided. Of particular note, the kinematics of emblematic parallel manipulators, such as the Delta robot as well as the original Gough and Stewart platforms, are revisited applying, in addition to the theory of screws, new methods devoted to simplifying the corresponding forward-displacement analysis, a challenging task for most parallel manipulators. Stands as the only book devoted to the acceleration, jerk and hyper-jerk (snap) analyses of rigid bodies by means of screw theory; Provides new strategies to simplify the forward kinematic...

  16. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    Science.gov (United States)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    According to the working principle of the binocular photoelectric instrument's optical axis parallelism digital calibration instrument, and considering all components of the instrument, the various factors that affect system precision are analyzed, and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the change of the center coordinate of the circular target image. The method can further guide the error distribution, optimize control of the factors that have a greater influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
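The Monte Carlo error-budget idea in this abstract can be illustrated compactly: sample each component error from an assumed distribution, combine the samples into a comprehensive error, and average over many trials. The error sources, distributions, and combination rule below are invented for illustration and are not the instrument's actual precision model.

```python
# Illustrative Monte Carlo precision analysis. All error sources and
# their distributions here are hypothetical placeholders.
import math
import random

def comprehensive_error(rng):
    alignment = rng.gauss(0.0, 0.05)     # assumed optical alignment error
    detector = rng.gauss(0.0, 0.03)      # assumed detector/centroiding noise
    quantize = rng.uniform(-0.02, 0.02)  # assumed image quantization error
    # Root-sum-square combination of independent error components.
    return math.sqrt(alignment ** 2 + detector ** 2 + quantize ** 2)

def mean_comprehensive_error(n_trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(comprehensive_error(rng) for _ in range(n_trials)) / n_trials
```

Re-running the estimate while shrinking one component's spread shows which factor dominates the comprehensive error, which is the kind of guidance the abstract describes.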

  17. Framework for Interactive Parallel Dataset Analysis on the Grid

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, David A.; Ananthan, Balamurali; /Tech-X Corp.; Johnson, Tony; Serbo, Victor; /SLAC

    2007-01-10

    We present a framework for use at a typical Grid site to facilitate custom interactive parallel dataset analysis targeting terabyte-scale datasets of the type typically produced by large multi-institutional science experiments. We summarize the needs for interactive analysis and show a prototype solution that satisfies those needs. The solution consists of a desktop client tool and a set of Web Services that allow scientists to sign onto a Grid site, compose analysis script code to carry out physics analysis on datasets, distribute the code and datasets to worker nodes, collect the results back to the client, and construct professional-quality visualizations of the results.

  18. Parallel algorithms for nuclear reactor analysis via domain decomposition method

    International Nuclear Information System (INIS)

    Kim, Yong Hee

    1995-02-01

    In this thesis, the neutron diffusion equation in reactor physics is discretized by the finite difference method and solved on a parallel computer network composed of T-800 transputers. The T-800 transputer is a message-passing MIMD (multiple instruction streams and multiple data streams) architecture. A parallel variant of the Schwarz alternating procedure for overlapping subdomains is developed with domain decomposition. The thesis provides a convergence analysis of the algorithm and improvements to its convergence. The convergence of the parallel Schwarz algorithms with DN (or ND), DD, NN, and mixed pseudo-boundary conditions (a weighted combination of Dirichlet and Neumann conditions) is analyzed for both continuous and discrete models in the two-subdomain case, and various underlying features are explored. The analysis shows that the convergence rate of the algorithm depends strongly on the pseudo-boundary conditions, and that the theoretically best one is the mixed boundary conditions (MM conditions). It is also shown that there may be a significant discrepancy between the continuous model analysis and the discrete model analysis. In order to accelerate the convergence of the parallel Schwarz algorithm, relaxation of the pseudo-boundary conditions is introduced, and a convergence analysis of the algorithm for the two-subdomain case is carried out. The analysis shows that under-relaxation of the pseudo-boundary conditions accelerates the convergence of the parallel Schwarz algorithm if the convergence rate without relaxation is negative, and that any relaxation (under or over) decelerates convergence if the convergence rate without relaxation is positive. Numerical implementation of the parallel Schwarz algorithm on an MIMD system requires multi-level iterations: two levels for fixed source problems, three levels for eigenvalue problems. Performance of the algorithm turns out to be very sensitive to the iteration strategy. In general, multi-level iterations provide good performance when
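The Schwarz alternating procedure with Dirichlet (DD) pseudo-boundary conditions can be sketched on a toy problem. The code below uses a 1-D stand-in, -u'' = 1 on (0,1) with u(0) = u(1) = 0, rather than the thesis's neutron diffusion equation, and runs serially: each sweep solves one overlapping subdomain with a Dirichlet value taken from the other subdomain's current iterate.

```python
# Two-subdomain overlapping Schwarz iteration for -u'' = 1 on (0,1),
# u(0) = u(1) = 0, discretized with the standard 3-point stencil.
# Grid sizes and the direct subdomain solver are illustrative.

def solve_subdomain(n, h, left, right, f=1.0):
    # Thomas-algorithm solve of -u[i-1] + 2*u[i] - u[i+1] = h*h*f
    # on n interior points, with prescribed end values left/right.
    b = [2.0] * n
    d = [h * h * f] * n
    d[0] += left
    d[-1] += right
    for i in range(1, n):
        w = -1.0 / b[i - 1]   # elimination multiplier a[i]/b[i-1]
        b[i] += w             # b[i] - w*c[i-1], with c[i-1] = -1
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] + u[i + 1]) / b[i]
    return u

def schwarz(n=19, sweeps=40):
    # Global grid of n interior points; the subdomains overlap on
    # indices 7..11. Pseudo-boundary values u[12] and u[6] are the
    # Dirichlet (DD) conditions exchanged between subdomains.
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for _ in range(sweeps):
        u[:12] = solve_subdomain(12, h, 0.0, u[12])   # left subdomain
        u[7:] = solve_subdomain(n - 7, h, u[6], 0.0)  # right subdomain
    return u
```

For this problem the 3-point stencil is nodally exact (the solution x(1-x)/2 is quadratic), and with this overlap the iteration contracts the error by roughly (0.35/0.65)^2 per sweep, so a few dozen sweeps reach the global discrete solution.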

  19. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    Energy Technology Data Exchange (ETDEWEB)

    2016-08-22

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to the low rank factors W and H within the alternating iterations.
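The alternating structure of NMF that the abstract describes can be shown with a serial toy: approximate a nonnegative matrix A by W·H, updating one factor while the other is held fixed. The sketch below uses the simpler multiplicative updates (which keep W and H nonnegative by construction) rather than the paper's distributed NLS/MPI algorithm, and pure-Python matrices rather than a linear algebra library.

```python
# Serial toy NMF via multiplicative updates: A ≈ W·H, all entries >= 0.
# A stand-in for the alternating-update structure, not the paper's
# distributed-memory algorithm.
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def nmf(A, k, iters=300, seed=0, eps=1e-9):
    rng = random.Random(seed)
    m, n = len(A), len(A[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        # H update: H *= (W^T A) / (W^T W H), elementwise.
        WtA = matmul(transpose(W), A)
        WtWH = matmul(matmul(transpose(W), W), H)
        H = [[H[i][j] * WtA[i][j] / (WtWH[i][j] + eps)
              for j in range(n)] for i in range(k)]
        # W update: W *= (A H^T) / (W H H^T), elementwise.
        AHt = matmul(A, transpose(H))
        WHHt = matmul(W, matmul(H, transpose(H)))
        W = [[W[i][j] * AHt[i][j] / (WHHt[i][j] + eps)
              for j in range(k)] for i in range(m)]
    return W, H
```

On an exactly rank-k nonnegative matrix the reconstruction W·H converges to A; the distributed algorithm in the paper parallelizes exactly these alternating factor updates across processors.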

  20. Locality-Driven Parallel Static Analysis for Power Delivery Networks

    KAUST Repository

    Zeng, Zhiyu

    2011-06-01

    Large VLSI on-chip Power Delivery Networks (PDNs) are challenging to analyze due to the sheer network complexity. In this article, a novel parallel partitioning-based PDN analysis approach is presented. We use the boundary circuit responses of each partition to divide the full grid simulation problem into a set of independent subgrid simulation problems. Instead of solving for exact boundary circuit responses, a more efficient scheme is proposed that provides a near-exact approximation to the boundary circuit responses by exploiting the spatial locality of flip-chip-type power grids. This scheme is also used in a block-based iterative error reduction process to achieve fast convergence. Detailed computational cost analysis and performance modeling are carried out to determine the optimal (or near-optimal) number of partitions for parallel implementation. Through the analysis of several large power grids, the proposed approach is shown to have excellent parallel efficiency, fast convergence, and favorable scalability. Our approach can solve a 16-million-node power grid in 18 seconds on an IBM p5-575 processing node with 16 Power5+ processors, which is 18.8X faster than a state-of-the-art direct solver. © 2011 ACM.

  1. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi

  2. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor.

    Science.gov (United States)

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-09-15

    In the development of multi-dimensional force sensors, dimension coupling is a ubiquitous factor restricting improvement of measurement accuracy. To effectively reduce the influence of dimension coupling on the parallel multi-dimensional force sensor, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, and the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, the mathematical model is established in combination with the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the calculation results show that the mechanical decoupling parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors.

  3. Data-Parallel Mesh Connected Components Labeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
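The Union-find core of such a labeling can be shown in a few lines. The serial sketch below labels the cells of an arbitrary adjacency graph and omits the paper's distributed multi-stage merge and spatial partitioning, which are its actual contribution.

```python
# Serial connected-components labeling with Union-find.
# Cells sharing a face are merged; each cell is then labeled by the
# root of its set. Illustrative single-process version only.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees flat
        x = parent[x]
    return x

def label_components(n_cells, adjacency):
    """adjacency: iterable of (cell_a, cell_b) face-sharing pairs."""
    parent = list(range(n_cells))
    for a, b in adjacency:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb  # union the two sub-meshes
    return [find(parent, i) for i in range(n_cells)]
```

In the distributed setting each block runs this locally, and the cross-processor stage merges labels for cells that touch block boundaries so that a sub-mesh spanning many processors ends up with one global label.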

  4. Optical escape factors for Doppler profiles in spherical, cylindrical and plane parallel geometries

    International Nuclear Information System (INIS)

    Otsuka, Masamoto.

    1977-12-01

    Optical escape factors for Doppler profiles in spherical, cylindrical and plane parallel geometries are tabulated over the range of optical depths from 10 -3 to 10 5 . Relations with the known formulae are discussed also. (auth.)

  5. Analysis of series resonant converter with series-parallel connection

    Science.gov (United States)

    Lin, Bor-Ren; Huang, Chien-Lan

    2011-02-01

    In this study, a parallel inductor-inductor-capacitor (LLC) resonant converter series-connected on the primary side and parallel-connected on the secondary side is presented for server power supply systems. Based on series resonant behaviour, the power metal-oxide-semiconductor field-effect transistors are turned on at zero voltage switching and the rectifier diodes are turned off at zero current switching. Thus, the switching losses on the power semiconductors are reduced. In the proposed converter, the primary windings of the two LLC converters are connected in series. Thus, the two converters have the same primary currents to ensure that they can supply the balance load current. On the output side, two LLC converters are connected in parallel to share the load current and to reduce the current stress on the secondary windings and the rectifier diodes. In this article, the principle of operation, steady-state analysis and design considerations of the proposed converter are provided and discussed. Experiments with a laboratory prototype with a 24 V/21 A output for server power supply were performed to verify the effectiveness of the proposed converter.

  6. Block-Parallel Data Analysis with DIY2

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, Dmitriy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Peterka, Tom [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-08-30

    DIY2 is a programming model and runtime for block-parallel analytics on distributed-memory machines. Its main abstraction is block-structured data parallelism: data are decomposed into blocks; blocks are assigned to processing elements (processes or threads); computation is described as iterations over these blocks, and communication between blocks is defined by reusable patterns. By expressing computation in this general form, the DIY2 runtime is free to optimize the movement of blocks between slow and fast memories (disk and flash vs. DRAM) and to concurrently execute blocks residing in memory with multiple threads. This enables the same program to execute in-core, out-of-core, serial, parallel, single-threaded, multithreaded, or combinations thereof. This paper describes the implementation of the main features of the DIY2 programming model and optimizations to improve performance. DIY2 is evaluated on benchmark test cases to establish baseline performance for several common patterns and on larger complete analysis codes running on large-scale HPC machines.

  7. Analysis of Retransmission Policies for Parallel Data Transmission

    Directory of Open Access Journals (Sweden)

    I. A. Halepoto

    2018-06-01

    Full Text Available. Stream Control Transmission Protocol (SCTP) is a transport layer protocol that is efficient, reliable, and connection-oriented as compared to Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). Additionally, SCTP has more innovative features such as multihoming, multistreaming and unordered delivery. With multihoming, SCTP establishes multiple paths between a sender and receiver. However, it uses only the primary path for data transmission and the secondary path (or paths) for fault tolerance. The concurrent multipath transfer extension of SCTP (CMT-SCTP) allows a sender to transmit data in parallel over multiple paths, which increases the overall transmission throughput. Parallel data transmission is beneficial for higher data rates. Parallel transmission is also useful in services such as video streaming, where if one connection suffers errors the transmission continues on alternate links. With parallel transmission, unordered arrival of data packets at the receiver is very common. The receiver has to wait until the missing data packets arrive, causing performance degradation when using CMT-SCTP. In order to reduce the transmission delay at the receiver, CMT-SCTP uses intelligent retransmission policies to immediately retransmit the missing packets. The retransmission policies used by CMT-SCTP are RTX-SSTHRESH, RTX-LOSSRATE and RTX-CWND. The main objective of this paper is the performance analysis of these retransmission policies. Simulations are performed with Network Simulator 2. In simulations with various scenarios and parameters, it is observed that RTX-LOSSRATE is a suitable policy.

  8. Effective damping for SSR analysis of parallel turbine-generators

    International Nuclear Information System (INIS)

    Agrawal, B.L.; Farmer, R.G.

    1988-01-01

    Damping is a dominant parameter in studies to determine SSR problem severity and countermeasure requirements. To reach valid conclusions for multi-unit plants, it is essential that the net effective damping of unequally loaded units be known. For the Palo Verde Nuclear Generating Station, extensive testing and analysis have been performed to verify and develop an accurate means of determining the effective damping of unequally loaded units in parallel. This has led to a unique and simple algorithm which correlates well with two other analytic techniques

  9. A dataflow analysis tool for parallel processing of algorithms

    Science.gov (United States)

    Jones, Robert L., III

    1993-01-01

    A graph-theoretic design process and software tool is presented for selecting a multiprocessing scheduling solution for a class of computational problems. The problems of interest are those that can be described using a dataflow graph and are intended to be executed repetitively on a set of identical parallel processors. Typical applications include signal processing and control law problems. Graph analysis techniques are introduced and shown to effectively determine performance bounds, scheduling constraints, and resource requirements. The software tool is shown to facilitate the application of the design process to a given problem.

  10. Parallelized preconditioned model building algorithm for matrix factorization

    OpenAIRE

    Kaya, Kamer; Birbil, İlker; Öztürk, Mehmet Kaan; Gohari, Amir

    2017-01-01

    Matrix factorization is a common task underlying several machine learning applications such as recommender systems, topic modeling, or compressed sensing. Given a large and possibly sparse matrix A, we seek two smaller matrices W and H such that their product is as close to A as possible. The objective is minimizing the sum of square errors in the approximation. Typically such problems involve hundreds of thousands of unknowns, so an optimizer must be exceptionally efficient. In this study, a...

  11. Correlation analysis of respiratory signals by using parallel coordinate plots.

    Science.gov (United States)

    Saatci, Esra

    2018-01-01

    Understanding the relationships between respiratory signals, i.e. the airflow, the mouth pressure, the relative temperature and the relative humidity during breathing, may improve measurement methods for respiratory mechanics and sensor designs, or open several possible applications in the analysis of respiratory disorders. Therefore, the main objective of this study was to propose a new combination of methods to determine the relationship between respiratory signals treated as multidimensional data. In order to reveal the coupling between the processes, two very different methods were used: well-known statistical correlation analysis (i.e. Pearson's correlation and the cross-correlation coefficient) and parallel coordinate plots (PCPs). Curve bundling with the number of intersections for the correlation analysis, a Least Mean Square Time Delay Estimator (LMS-TDE) for point delay detection, and visual metrics for the recognition of visual structures were proposed and utilized in the PCPs. The number of intersections increased when the correlation coefficient changed from high positive to high negative correlation between the respiratory signals, especially if the whole breath was processed. LMS-TDE coefficients plotted in the PCP matched the point delay findings of the correlation analysis well. Visual inspection of the PCPs by visual metrics showed ranges, dispersions, entropy comparisons, and linear and sinusoidal-like relationships between the respiratory signals. It is demonstrated that basic correlation analysis together with parallel coordinate plots perceptually motivates the visual metrics in the display and thus can be considered an aid to user analysis by providing meaningful views of the data. Copyright © 2017 Elsevier B.V. All rights reserved.
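Point-delay detection of the kind LMS-TDE performs can be approximated by a brute-force cross-correlation search: the lag that maximizes the cross-correlation of two signals estimates the delay between them. The sketch below is a generic stand-in, not the paper's estimator, and the signals in the usage are synthetic rather than respiratory recordings.

```python
# Estimate the lag of signal y relative to x as the shift that
# maximizes their (unnormalized) cross-correlation.
def cross_corr_delay(x, y, max_lag):
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Sum products over the overlapping portion of the two signals.
        s = sum(x[i] * y[i + lag]
                for i in range(len(x)) if 0 <= i + lag < len(y))
        if s > best_val:
            best_lag, best_val = lag, s
    return best_lag
```

Under this convention, if y is a copy of x advanced by d samples (y[i] = x[i + d]), the estimator returns -d; normalizing each overlap by its length and energy would turn the score into the cross-correlation coefficient mentioned in the abstract.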

  12. Performance Analysis of Parallel Mathematical Subroutine library PARCEL

    International Nuclear Information System (INIS)

    Yamada, Susumu; Shimizu, Futoshi; Kobayashi, Kenichi; Kaburaki, Hideo; Kishida, Norio

    2000-01-01

    The parallel mathematical subroutine library PARCEL (Parallel Computing Elements) has been developed by the Japan Atomic Energy Research Institute for easy use of typical parallelized mathematical codes in application problems on distributed parallel computers. PARCEL includes routines for linear equations, eigenvalue problems, pseudo-random number generation, and fast Fourier transforms. It is shown that the performance results for the linear equation routines exhibit good parallelization efficiency on vector, as well as scalar, parallel computers. A comparison of the efficiency results with the PETSc (Portable, Extensible Toolkit for Scientific Computation) library is reported. (author)

  13. Dictionary Learning Based on Nonnegative Matrix Factorization Using Parallel Coordinate Descent

    Directory of Open Access Journals (Sweden)

    Zunyi Tang

    2013-01-01

    Full Text Available. Sparse representation of signals via an overcomplete dictionary has recently received much attention, as it has produced promising results in various applications. Since nonnegativity of the signals and the dictionary is required in some applications, for example multispectral data analysis, conventional dictionary learning methods simply imposed with nonnegativity may become inapplicable. In this paper, we propose a novel method for learning a nonnegative, overcomplete dictionary for such a case. This is accomplished by posing the sparse representation of nonnegative signals as a problem of nonnegative matrix factorization (NMF) with a sparsity constraint. By employing the coordinate descent strategy for optimization and extending it to the multivariable case for processing in parallel, we develop a so-called parallel coordinate descent dictionary learning (PCDDL) algorithm, which is structured by iteratively solving two optimization problems: the learning process of the dictionary and the estimation process of the coefficients for constructing the signals. Numerical experiments demonstrate that the proposed algorithm performs better than the conventional nonnegative K-SVD (NN-KSVD) algorithm and several other algorithms used for comparison. Moreover, its computational cost is remarkably lower than that of the compared algorithms.

  14. Microprocessor event analysis in parallel with Camac data acquisition

    International Nuclear Information System (INIS)

    Cords, D.; Eichler, R.; Riege, H.

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a Camac system (GEC-ELLIOTT System Crate) and shares the Camac access with a Nord-10S computer. Interfaces have been designed and tested for execution of Camac cycles, communication with the Nord-10S computer and DMA transfer from Camac to the MIPROC-16 memory. The system is used in the JADE data-acquisition system at PETRA, where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the result of various checks is appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (orig.)

  15. Microprocessor event analysis in parallel with CAMAC data acquisition

    CERN Document Server

    Cords, D; Riege, H

    1981-01-01

    The Plessey MIPROC-16 microprocessor (16 bits, 250 ns execution time) has been connected to a CAMAC System (GEC-ELLIOTT System Crate) and shares the CAMAC access with a Nord-10S computer. Interfaces have been designed and tested for execution of CAMAC cycles, communication with the Nord-10S computer and DMA-transfer from CAMAC to the MIPROC-16 memory. The system is used in the JADE data-acquisition-system at PETRA where it receives the data from the detector in parallel with the Nord-10S computer via DMA through the indirect-data-channel mode. The microprocessor performs an on-line analysis of events and the results of various checks are appended to the event. In case of spurious triggers or clear beam gas events, the Nord-10S buffer will be reset and the event omitted from further processing. (5 refs).

  16. Parallel Wavefront Analysis for a 4D Interferometer

    Science.gov (United States)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to capture several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time, and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.

  17. Analysis of Parallel Burn Without Crossfeed TSTO RLV Architectures and Comparison to Parallel Burn With Crossfeed and Series Burn Architectures

    Science.gov (United States)

    Smith, Garrett; Phillips, Alan

    2002-01-01

    There are currently three dominant TSTO class architectures. These are Series Burn (SB), Parallel Burn with crossfeed (PBw/cf), and Parallel Burn without crossfeed (PBncf). The goal of this study was to determine what factors uniquely affect PBncf architectures, how each of these factors interact, and to determine from a performance perspective whether a PBncf vehicle could be competitive with a PBw/cf or SB vehicle using equivalent technology and assumptions. In all cases, performance was evaluated on a relative basis for a fixed payload and mission by comparing gross and dry vehicle masses of a closed vehicle. Propellant combinations studied were LOX: LH2 propelled orbiter and booster (HH) and LOX: Kerosene booster with LOX: LH2 orbiter (KH). The study conclusions were: 1) a PBncf orbiter should be throttled as deeply as possible after launch until the staging point. 2) a detailed structural model is essential to accurate architecture analysis and evaluation. 3) a PBncf TSTO architecture is feasible for systems that stage at Mach 7. 3a) HH architectures can achieve a mass growth relative to PBw/cf of ratio and to the position of the orbiter required to align the nozzle heights at liftoff. 5) thrust to weight ratios of 1.3 at liftoff and between 1.0 and 0.9 when staging at Mach 7 appear to be close to ideal for PBncf vehicles. 6) performance for all vehicles studied is better when staged at Mach 7 instead of Mach 5. The study showed that a Series Burn architecture has the lowest gross mass for HH cases, and has the lowest dry mass for KH cases. The potential disadvantages of SB are the required use of an air-start for the orbiter engines and potential CG control issues. A Parallel Burn with crossfeed architecture solves both these problems, but the mechanics of a large bipropellant crossfeed system pose significant technical difficulties. 
    Parallel Burn without crossfeed vehicles start both booster and orbiter engines on the ground and thus avoid both the risk of

  18. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    Science.gov (United States)

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data into an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations using a molecular computer. In order to achieve this, we propose three DNA-based algorithms, for a parallel subtractor, a parallel comparator, and parallel modular arithmetic, that formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that cryptosystems using public keys are perhaps insecure, and also presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  19. Physics Structure Analysis of Parallel Waves Concept of Physics Teacher Candidate

    International Nuclear Information System (INIS)

    Sarwi, S; Linuwih, S; Supardi, K I

    2017-01-01

    The aim of this research was to find the parallel structure of wave physics concepts and the factors that influence the formation of parallel conceptions in physics teacher candidates. The method used was qualitative research of the cross-sectional design type. The subjects were five third-semester basic physics students and six fifth-semester wave course students. Data were collected using think-aloud protocols and written tests. Quantitative data were analysed with descriptive percentage techniques. The data analysis technique for belief and awareness of answers uses an explanatory analysis. Results of the research include: 1) the structure of the concept can be displayed through the illustration of a map containing the theoretical core, supplements to the theory, and phenomena that occur daily; 2) a trend towards parallel conceptions of wave physics was identified for stationary waves, resonance of sound, and the propagation of transverse electromagnetic waves; 3) the parallel conceptions are influenced by less comprehensive reading of textbooks and by partial understanding of the knowledge forming the structure of the theory. (paper)

  20. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU{_}DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.
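A minimal sketch of the level-scheduling idea behind such static schedules, assuming a small hypothetical column-dependency DAG rather than the authors' actual mapping phase: tasks whose dependencies all sit in earlier levels can be factored concurrently.

```python
from collections import defaultdict


def level_schedule(deps):
    """Group tasks into levels so each level can run in parallel.

    deps[t] is the set of tasks that must finish before t; for sparse LU,
    these edges would be read off the pruned dependency graphs of the
    factors (here they are invented for illustration).
    """
    level = {}

    def lvl(t):
        if t not in level:
            level[t] = 1 + max((lvl(d) for d in deps.get(t, ())), default=-1)
        return level[t]

    for t in deps:
        lvl(t)
    buckets = defaultdict(list)
    for t, l in level.items():
        buckets[l].append(t)
    return [sorted(buckets[l]) for l in sorted(buckets)]


# Hypothetical dependencies among five columns of a sparse factorization.
deps = {0: set(), 1: set(), 2: {0}, 3: {1, 2}, 4: {3}}
schedule = level_schedule(deps)   # columns grouped by earliest start level
```

On a distributed-memory machine each level would be dispatched across processes, with communication only between consecutive levels.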

  1. A Parallel Software Pipeline for DMET Microarray Genotyping Data Analysis

    Directory of Open Access Journals (Sweden)

    Giuseppe Agapito

    2018-06-01

    Full Text Available Personalized medicine is an aspect of P4 medicine (predictive, preventive, personalized and participatory) based precisely on the customization of all medical characteristics of each subject. In personalized medicine, the development of medical treatments and drugs is tailored to the individual characteristics and needs of each subject, according to the study of diseases at different scales, from genotype to phenotype. To make the goal of personalized medicine concrete, it is necessary to employ high-throughput methodologies such as Next Generation Sequencing (NGS), Genome-Wide Association Studies (GWAS), Mass Spectrometry or Microarrays, which are able to investigate a single disease from a broad perspective. A side effect of high-throughput methodologies is the massive amount of data produced for each single experiment, which poses several challenges (e.g., high execution time and required memory) to bioinformatics software. Thus a main requirement of modern bioinformatics software is the use of good software engineering methods and efficient programming techniques able to face those challenges, including the use of parallel programming and of efficient, compact data structures. This paper presents the design and the experimentation of a comprehensive software pipeline, named microPipe, for the preprocessing, annotation and analysis of microarray-based Single Nucleotide Polymorphism (SNP) genotyping data. A use case in pharmacogenomics is presented. The main advantages of using microPipe are: the reduction of errors that may happen when trying to make data compatible among different tools; the possibility to analyze huge datasets in parallel; and the easy annotation and integration of data. microPipe is available under a Creative Commons license and is freely downloadable for academic and not-for-profit institutions.
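The parallel-preprocessing idea in such a pipeline can be sketched with a worker pool over independent samples. The sample format and the normalization rule here are invented for illustration and are not microPipe's API:

```python
from multiprocessing import Pool


def preprocess_sample(sample):
    """Hypothetical per-sample step: normalize raw genotype calls to one
    canonical notation so downstream tools agree on the format."""
    sample_id, calls = sample
    return sample_id, [c.upper().replace("/", "|") for c in calls]


# Toy dataset: two samples, two SNP calls each (illustrative only).
samples = [("s1", ["a/a", "a/g"]), ("s2", ["g/g", "a/a"])]

if __name__ == "__main__":
    # Samples are independent, so they map cleanly onto worker processes.
    with Pool(processes=2) as pool:
        cleaned = dict(pool.map(preprocess_sample, samples))
```

Because samples carry no shared state, this embarrassingly parallel stage scales with the number of cores, which is what makes huge genotyping datasets tractable.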

  2. Analysis and Design of High-Order Parallel Resonant Converters

    Science.gov (United States)

    Batarseh, Issa Eid

    1990-01-01

    In this thesis, a special state variable transformation technique has been derived for the analysis of high-order dc-to-dc resonant converters. Converters comprising high-order resonant tanks have the advantage of utilizing the parasitic elements by making them part of the resonant tank. A new set of state variables is defined in order to make use of two-dimensional state-plane diagrams in the analysis of high-order converters. Such a method has been successfully used for the analysis of the conventional Parallel Resonant Converter (PRC). Consequently, two-dimensional state-plane diagrams are used to analyze the steady-state response of third- and fourth-order PRCs when these converters are operated in the continuous conduction mode. Based on this analysis, a set of control characteristic curves for the LCC-, LLC- and LLCC-type PRCs is presented, from which various converter design parameters are obtained. Various design curves for component value selection and device ratings are given. This analysis of high-order resonant converters shows that the addition of reactive components to the resonant tank results in converters with better performance characteristics when compared with the conventional second-order PRC. A complete design procedure, along with design examples for 2nd, 3rd and 4th order converters, is presented. Practical power supply units, normally used for computer applications, were built and tested using the LCC-, LLC- and LLCC-type commutation schemes. In addition, computer simulation results are presented for these converters in order to verify the theoretical results.

  3. A parallel implementation of 3D Zernike moment analysis

    Science.gov (United States)

    Berjón, Daniel; Arnaldo, Sergio; Morán, Francisco

    2011-01-01

    Zernike polynomials are a well known set of functions that find many applications in image or pattern characterization because they allow the construction of shape descriptors that are invariant against translations, rotations or scale changes. The concepts behind them can be extended to higher-dimensional spaces, making them also fit to describe volumetric data. They have been used less than their properties might suggest due to their high computational cost. We present a parallel implementation of 3D Zernike moments analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU. These include how to deal with numerical inaccuracies, due to the high precision demands of the algorithm, and how to deal with the high volume of input data so that it does not become a bottleneck for the system.

  4. One Factor or Two Parallel Processes? Comorbidity and Development of Adolescent Anxiety and Depressive Disorder Symptoms

    Science.gov (United States)

    Hale, William W., III; Raaijmakers, Quinten A. W.; Muris, Peter; van Hoof, Anne; Meeus, Wim H. J.

    2009-01-01

    Background: This study investigates whether anxiety and depressive disorder symptoms of adolescents from the general community are best described by a model that assumes they are indicative of one general factor or by a model that assumes they are two distinct disorders with parallel growth processes. Additional analyses were conducted to explore…

  5. Regional-scale calculation of the LS factor using parallel processing

    Science.gov (United States)

    Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong

    2015-05-01

    With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementations of algorithms for computing the LS factor are becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithm characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow that reduces the time taken to export unnecessary intermediate data, and a buffer-communication-computation strategy for improving the communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
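For reference, one common RUSLE-style formulation of the per-cell LS factor that such a parallel model evaluates; published variants differ in their coefficients, so treat this sketch and its inputs as illustrative:

```python
import math


def ls_factor(slope_length_m, slope_deg):
    """One common RUSLE-style LS form (coefficients vary between
    implementations; this is an illustrative sketch, not the paper's)."""
    theta = math.radians(slope_deg)
    # Rill/interrill ratio and slope-length exponent m.
    beta = (math.sin(theta) / 0.0896) / (3.0 * math.sin(theta) ** 0.8 + 0.56)
    m = beta / (1.0 + beta)
    l = (slope_length_m / 22.13) ** m        # slope-length factor L
    if math.tan(theta) < 0.09:               # gentle slopes (< 9 %)
        s = 10.8 * math.sin(theta) + 0.03
    else:                                    # steeper slopes
        s = 16.8 * math.sin(theta) - 0.50
    return l * s


ls = ls_factor(slope_length_m=50.0, slope_deg=8.0)
```

In the MPI model each cell's slope and slope length come from the upstream flow-routing steps, which is where the data dependence between tiles arises.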

  6. Parallel multiple instance learning for extremely large histopathology image analysis.

    Science.gov (United States)

    Xu, Yan; Li, Yeshu; Shen, Zhengyang; Wu, Ziwei; Gao, Teng; Fan, Yubo; Lai, Maode; Chang, Eric I-Chao

    2017-08-03

    Histopathology images are critical for medical diagnosis, e.g., of cancer, and its treatment. A standard histopathology slice can be easily scanned at a high resolution of, say, 200,000×200,000 pixels. These high resolution images can make most existing image processing tools infeasible or less effective when operated on a single machine with limited memory, disk space and computing power. In this paper, we propose an algorithm tackling this emerging "big data" problem utilizing parallel computing on High-Performance-Computing (HPC) clusters. Experimental results on a large-scale data set (1318 images at a scale of 10 billion pixels each) demonstrate the efficiency and effectiveness of the proposed algorithm for low-latency real-time applications. The proposed framework provides an effective and efficient system for extremely large histopathology image analysis. It is based on the multiple instance learning formulation for weakly supervised image classification, segmentation and clustering. When a max-margin concept is adopted for different clusters, we obtain further improvement in clustering performance.
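The tiling strategy implied by such cluster processing can be sketched as follows; the tile size and per-tile work are placeholders, not the paper's algorithm:

```python
from multiprocessing import Pool


def tile_bounds(width, height, tile):
    """Split a huge slide into (x, y, w, h) tiles that each fit in one
    worker's memory. Edge tiles are clipped to the image bounds."""
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]


def process_tile(bounds):
    """Placeholder for per-tile work (feature extraction, instance
    labeling, segmentation); here it just counts pixels."""
    x, y, w, h = bounds
    return w * h


if __name__ == "__main__":
    # A 200,000 x 200,000 slide in 50,000-pixel tiles -> 16 tiles.
    tiles = tile_bounds(200_000, 200_000, 50_000)
    with Pool() as pool:
        pixel_counts = pool.map(process_tile, tiles)
```

On an HPC cluster the pool would be replaced by nodes, but the decomposition logic (independent tiles, results reduced afterwards) is the same.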

  7. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    Full Text Available The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while the convergence can be still established. Preliminary numerical tests on stable principal component pursuit problem testify to the advantages of the enlargement.

  8. Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis

    Directory of Open Access Journals (Sweden)

    H.X. Lin

    2004-01-01

    Full Text Available Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms intended specifically for parallel computation have to be designed. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
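The "inherently sequential" claim can be seen in the standard Thomas algorithm for a tridiagonal system, where every forward-sweep step depends on the one before it; this is the baseline that specially designed parallel schemes (such as the ACER algorithm named above) aim to replace:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d.

    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused). Each sweep step reads the result of the previous
    step, which is exactly the sequential dependence chain in the text.
    """
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward sweep (sequential)
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x


# 4x4 system with diagonal 2 and off-diagonals -1; exact solution [1,2,3,4].
x = thomas(a=[0, -1, -1, -1], b=[2, 2, 2, 2],
           c=[-1, -1, -1, 0], d=[0, 0, 0, 5])
```

Cyclic reduction breaks this chain by eliminating the odd-indexed unknowns simultaneously, trading extra arithmetic for parallelism.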

  9. Parallel imaging: is GRAPPA a useful acquisition tool for MR imaging intended for volumetric brain analysis?

    Directory of Open Access Journals (Sweden)

    Frank Anders

    2009-08-01

    Full Text Available Abstract Background The work presented here investigates parallel imaging applied to T1-weighted high resolution imaging for use in longitudinal volumetric clinical studies involving Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients. This was done in an effort to shorten acquisition times and so minimise the risk of motion artefacts caused by patient discomfort and disorientation. The principal question is: "Can parallel imaging be used to acquire images at 1.5 T of sufficient quality to allow volumetric analysis of patient brains?" Methods Optimisation studies were performed on a young healthy volunteer, and the selected protocol (including the use of two different parallel imaging acceleration factors) was then tested on a cohort of 15 elderly volunteers including MCI and AD patients. In addition to automatic brain segmentation, hippocampus volumes were manually outlined and measured in all patients. The 15 patients were scanned on a second occasion approximately one week later using the same protocol and evaluated in the same manner to test the repeatability of measurement using images acquired with the GRAPPA parallel imaging technique applied to the MPRAGE sequence. Results Intraclass correlation tests show almost perfect agreement between repeated measurements of both the segmented brain parenchyma fraction and the regional measurements of the hippocampi. The protocol is suitable for both global and regional volumetric measurement in dementia patients. Conclusion In summary, these results indicate that parallel imaging can be used without detrimental effect on brain tissue segmentation and volumetric measurement and should be considered for both clinical and research studies where longitudinal measurements of brain tissue volumes are of interest.

  10. Measurement and analysis on dynamic behaviour of parallel-plate assembly in nuclear reactors

    International Nuclear Information System (INIS)

    Chen Junjie; Guo Changqing; Zou Changchuan

    1997-01-01

    Measurement and analysis of the dynamic behaviour of parallel-plate assemblies in nuclear reactors have been explored. An electromagnetic method is presented as a new way of measuring and analysing the dynamic behaviour of the parallel-plate assembly, treated as a structure of multiple parallel beams joined to a single beam. Theoretical analysis and computed results for the dry-modal natural frequencies show good agreement with the experimental measurements

  11. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve a reduction of a large set of data, extracting essential features without a prior hypothesis. Owing to the development of computerized systems, the use of larger samplings, the possibility of sequential data acquisition and the increase in dynamic studies, the problem of data compression can now be encountered in routine work. Thus, results obtained for the compression of scintigraphic images are first presented. Then the possibilities offered by factor analysis for scan processing are discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, is considered for compression and processing [fr]

  12. Kinematics analysis and simulation of a new underactuated parallel robot

    Directory of Open Access Journals (Sweden)

    Wenxu YAN

    2017-04-01

    Full Text Available In a traditional robot, the number of degrees of freedom is equal to the number of driving motors, which causes defects such as low efficiency. To overcome that problem, a new underactuated parallel robot, based on the traditional parallel robot, is presented. The structural characteristics and working principles of the underactuated parallel robot are analyzed. The forward and inverse solutions are derived by way of space analytic geometry and vector algebra. The kinematics model is established, and MATLAB is employed to verify the accuracy of the forward and inverse solutions and to identify the optimal workspace. The simulation results show that the robot can switch between three and four degrees of freedom with only three driving motors, improving the efficiency of robot grasping, with the characteristics of a large workspace, high-speed operation, high positioning accuracy, low manufacturing cost and so on, and it will have a wide range of industrial applications.

  13. Basic design of parallel computational program for probabilistic structural analysis

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi

    1999-06-01

    In our laboratory, as part of 'development of damage evaluation methods for structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a massively parallel computation system coupled with material strength theory, based on microscopic fracture mechanics for latent cracks and a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, basic formulae, and parallel computation programming methods related to the principal elements in the basic design of the computational mechanics program. (author)

  15. Resonance analysis in parallel voltage-controlled Distributed Generation inverters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Blaabjerg, Frede; Chen, Zhe

    2013-01-01

    Thanks to the fast responses of the inner voltage and current control loops, the dynamic behaviors of parallel voltage-controlled Distributed Generation (DG) inverters depend not only on the stability of load sharing among them, but also on the interactions between the voltage control loops...

  16. Analytical and experimental analysis of a parallel leaf spring guidance

    NARCIS (Netherlands)

    Meijaard, Jacob Philippus; Brouwer, Dannis Michel; Jonker, Jan B.; Denier, J.; Finn, M.

    2008-01-01

    A parallel leaf spring guidance is defined as a benchmark problem for flexible multibody formalisms and codes. The mechanism is loaded by forces and an additional moment or misalignment. Buckling loads, changes in compliance and frequencies, and large-amplitude vibrations are calculated. A

  17. Capacity Analysis for Parallel Runway through Agent-Based Simulation

    Directory of Open Access Journals (Sweden)

    Yang Peng

    2013-01-01

    Full Text Available The parallel runway is the mainstream structure at Chinese hub airports; the runway is often the bottleneck of an airport, and the evaluation of its capacity is of great importance to airport management. This study outlines a model, multiagent architecture, implementation approach, and software prototype of a simulation system for evaluating runway capacity. Agent Unified Modeling Language (AUML) is applied to illustrate the inbound and departure procedures of planes and to design the agent-based model. The model is evaluated experimentally, and its quality is studied in comparison with models created with SIMMOD and Arena. The results appear highly efficient, so the method can be applied to parallel runway capacity evaluation, and the model offers favorable flexibility and extensibility.

  18. Kinematic Analysis and Optimization of a New Compliant Parallel Micromanipulator

    Directory of Open Access Journals (Sweden)

    Qingsong Xu

    2008-11-01

    Full Text Available In this paper, a new three translational degrees of freedom (DOF) compliant parallel micromanipulator (CPM) is proposed, which achieves the excellent accuracy of parallel mechanisms with flexure hinges. The system is established by a proper selection of hardware and analyzed via the derived pseudo-rigid-body model. In view of the physical constraints imposed by both the piezoelectric actuators and the flexure hinges, the CPM's reachable workspace is determined analytically, in which a maximum cylinder defined as a usable workspace can be inscribed. Moreover, the optimal design of the CPM, considering the usable workspace size and the global dexterity index simultaneously, is carried out by utilizing the approaches of the direct search method, genetic algorithm (GA), and particle swarm optimization (PSO), respectively. The simulation results show that PSO is the best method for the optimization, and the results are valuable in the design of a new micromanipulator.

  20. Parallel database search and prime factorization with magnonic holographic memory devices

    Energy Technology Data Exchange (ETDEWEB)

    Khitun, Alexander [Electrical and Computer Engineering Department, University of California - Riverside, Riverside, California 92521 (United States)

    2015-12-28

    In this work, we describe the capabilities of Magnonic Holographic Memory (MHM) for parallel database search and prime factorization. MHM is a type of holographic device which utilizes spin waves for data transfer and processing. Its operation is based on the correlation between the phases and the amplitudes of the input spin waves and the output inductive voltage. The input of MHM is provided by a phased array of spin wave generating elements, allowing the production of phase patterns of arbitrary form. The latter makes it possible to code logic states into the phases of propagating waves and exploit wave superposition for parallel data processing. We present the results of numerical modeling illustrating parallel database search and prime factorization. The results of numerical simulations on the database search are in agreement with the available experimental data. The use of classical wave interference may result in a significant speedup over conventional digital logic circuits in special-task data processing (e.g., √n in database search). Potentially, magnonic holographic devices can be implemented as complementary logic units to digital processors. Physical limitations and technological constraints of the spin wave approach are also discussed.

  3. Absorbed dose calibration factors for parallel-plate chambers in high energy photon beams

    International Nuclear Information System (INIS)

    McEwen, M.R.; Duane, S.; Thomas, R.A.S.

    2002-01-01

    An investigation was carried out into the performance of parallel-plate chambers in 60Co and MV photon beams. The aim was to derive calibration factors, investigate chamber-to-chamber variability and provide much-needed information on the use of parallel-plate chambers in high-energy X-ray beams. A set of NE2561/NE2611 reference chambers, calibrated against the primary standard graphite calorimeter, is used for the dissemination of absorbed dose to water. The parallel-plate chambers were calibrated by comparison with the NPL reference chambers in a water phantom. Two types of parallel-plate chamber were investigated, the NACP-02 and the Roos, and measurements were made at 60Co and 6 linac photon energies (6-19 MV). Calibration factors were derived together with polarity corrections. The standard uncertainty in the calibration of a chamber in terms of absorbed dose to water is estimated to be ±0.75%. The results of the polarity measurements were somewhat confusing. One would expect the correction to be small, and previous measurements in electron beams have indicated that there is little variation between chambers of these types. However, some chambers gave unexpectedly large polarity corrections, up to 0.8%. By contrast, the measured polarity correction for an NE2611 chamber was less than 0.13% at all energies. The reason for these large polarity corrections is not clear, but experimental error and linac variations have been ruled out. By combining the calibration data for the different chambers it was possible to obtain experimental k_Q factors for the two chamber types. It would appear from the data that the variations between chambers of the same type are random, and one can therefore define a generic curve for each chamber type. These are presented in Figure 1, together with equivalent data for two cylindrical chamber types, NE2561/NE2611 and NE2571. As can be seen, there is a clear difference between the curves for the cylindrical chambers and those for the
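The calibration formalism behind such k_Q measurements combines the corrected chamber reading with the Co-60 absorbed-dose calibration factor and the beam-quality factor; a minimal TRS-398-style sketch with illustrative (not measured) numbers:

```python
def absorbed_dose_to_water(m_raw_nc, n_dw_gy_per_nc, k_q,
                           k_pol=1.0, k_tp=1.0):
    """TRS-398-style formalism: D_w = M * k_pol * k_tp * N_D,w * k_Q.

    m_raw_nc:        raw chamber reading (nC)
    n_dw_gy_per_nc:  absorbed-dose-to-water calibration factor at Co-60
    k_q:             beam-quality correction for the user's beam
    k_pol, k_tp:     polarity and temperature-pressure corrections
    All values below are illustrative placeholders, not chamber data.
    """
    return m_raw_nc * k_pol * k_tp * n_dw_gy_per_nc * k_q


dose = absorbed_dose_to_water(m_raw_nc=20.0, n_dw_gy_per_nc=0.05,
                              k_q=0.992, k_pol=1.002)
```

The record's point about 0.8% polarity corrections matters precisely because k_pol multiplies straight into the dose: an unaccounted 0.8% shift propagates into a 0.8% dose error.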

  4. Evaluating parallel relational databases for medical data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; Wilson, Andrew T.

    2012-03-01

    Hospitals have always generated and consumed large amounts of data concerning patients, treatment and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This naturally raises the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

  5. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  6. Analysis and Design of Embedded Controlled Parallel Resonant Converter

    Directory of Open Access Journals (Sweden)

    P. CHANDRASEKHAR

    2009-07-01

    Full Text Available A microcontroller-based, constant-frequency-controlled full-bridge LC parallel resonant converter is presented in this paper for an electrolyser application. An electrolyser is a part of a renewable energy system which generates hydrogen from water electrolysis. The DC power required by the electrolyser system is supplied by the DC-DC converter. Owing to operation at constant frequency, the filter designs are simplified and the utilization of magnetic components is improved. This converter has advantages such as high power density, low EMI and reduced switching stresses. The DC-DC converter system is simulated using MATLAB/Simulink. Detailed simulation results are presented and compared with the experimental results.

  7. Parallel imports of hospital pharmaceuticals: An empirical analysis of price effects from parallel imports and the design of procurement procedures in the Danish hospital sector

    OpenAIRE

    Hostenkamp, Gisela; Kronborg, Christian; Arendt, Jacob Nielsen

    2012-01-01

    We analyse pharmaceutical imports in the Danish hospital sector. In this market medicines are publicly tendered using first-price sealed-bid procurement auctions. We analyse whether parallel imports have an effect on pharmaceutical prices and whether the way tenders were organised matters for the competitive effect of parallel imports on prices. Our theoretical analysis shows that the design of the procurement rules affects both market structure and pharmaceutical prices. Parallel imports may...

  8. Study of talcum charging status in parallel plate electrostatic separator based on particle trajectory analysis

    Science.gov (United States)

    Yunxiao, CAO; Zhiqiang, WANG; Jinjun, WANG; Guofeng, LI

    2018-05-01

    Electrostatic separation has been extensively used in mineral processing and has the potential to separate gangue minerals from raw talcum ore. In electrostatic separation, the particle charging status is one of the important influencing factors. To accurately describe the talcum particle charging status in a parallel plate electrostatic separator, this paper proposes an image-processing method. Based on the actual trajectories obtained from sequence images of particle movement and an analysis of the physical forces applied to a charged particle, a numerical model is built which can calculate the charge-to-mass ratios representing the charging status of particles and simulate the particle trajectories. The simulated trajectories agree well with the experimental results obtained by image processing. In addition, chemical composition analysis is employed to reveal the relationship between iron gangue mineral content and charge-to-mass ratios. The results show that the proposed method is effective for describing the particle charging status in electrostatic separation.

  9. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
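
    The procedure behind these programs, Horn's parallel analysis, can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not O'Connor's SPSS/SAS code; the 95th-percentile criterion shown is one of the variants mentioned in the abstract.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain the components whose observed
    eigenvalues exceed those obtained from random data of the same shape."""
    n, p = data.shape
    rng = np.random.default_rng(seed)
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Reference eigenvalues from random normal data of the same shape
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresh = np.percentile(rand, percentile, axis=0)
    keep = obs > thresh
    return p if keep.all() else int(np.argmin(keep))

# Six variables driven by two latent factors -> expect 2 components
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
x = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((500, 3)),
               f[:, [1]] + 0.5 * rng.standard_normal((500, 3))])
print(parallel_analysis(x))  # -> 2
```

    Unlike the eigenvalues-greater-than-one rule, the retention threshold here adapts to the sample size and number of variables, which is what makes the procedure statistically defensible.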

  10. A comparative critical analysis of modern task-parallel runtimes.

    Energy Technology Data Exchange (ETDEWEB)

    Wheeler, Kyle Bruce; Stark, Dylan; Murphy, Richard C.

    2012-12-01

    The rise in node-level parallelism has increased interest in task-based parallel runtimes for a wide array of application areas. Applications have a wide variety of task spawning patterns which frequently change during the course of application execution, based on the algorithm or solver kernel in use. Task scheduling and load balance regimes, however, are often highly optimized for specific patterns. This paper uses four basic task spawning patterns to quantify the impact of specific scheduling policy decisions on execution time. We compare the behavior of six publicly available tasking runtimes: Intel Cilk, Intel Threading Building Blocks (TBB), Intel OpenMP, GCC OpenMP, Qthreads, and High Performance ParalleX (HPX). With the exception of Qthreads, the runtimes prove to have schedulers that are highly sensitive to application structure. No runtime is able to provide the best performance in all cases, and those that do provide the best performance in some cases, unfortunately, provide extremely poor performance when the application structure does not match the scheduler's assumptions.

  11. Parallel Dynamic Analysis of a Large-Scale Water Conveyance Tunnel under Seismic Excitation Using ALE Finite-Element Method

    Directory of Open Access Journals (Sweden)

    Xiaoqing Wang

    2016-01-01

    Full Text Available Parallel analyses of the dynamic responses of a large-scale water conveyance tunnel under seismic excitation are presented in this paper. A full three-dimensional numerical model considering the water-tunnel-soil coupling is established and adopted to investigate the tunnel’s dynamic responses. The movement and sloshing of the internal water are simulated using the multi-material Arbitrary Lagrangian Eulerian (ALE) method. Nonlinear fluid–structure interaction (FSI) between tunnel and inner water is treated by using the penalty method. Nonlinear soil-structure interaction (SSI) between soil and tunnel is dealt with by using the surface-to-surface contact algorithm. To overcome computing power limitations and to deal with such a large-scale calculation, a parallel algorithm based on modified recursive coordinate bisection (MRCB), considering the balance of SSI and FSI loads, is proposed and used. The whole simulation is accomplished on Dawning 5000 A using the proposed MRCB based parallel algorithm optimized to run on supercomputers. The simulation model and the proposed approaches are validated by comparison with the added mass method. Dynamic responses of the tunnel are analyzed and the parallelism is discussed. In addition, factors affecting the dynamic responses are investigated. Good speedup and parallel efficiency demonstrate the scalability of the parallel method, and the analysis results can be used to aid in the design of water conveyance tunnels.
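
    The recursive coordinate bisection idea behind MRCB can be illustrated with a minimal sketch. The function below is an illustrative assumption, not the authors' code: it implements plain RCB with scalar per-node weights, whereas the paper's modified variant additionally balances the SSI and FSI coupling loads.

```python
import numpy as np

def recursive_coordinate_bisection(points, weights, n_parts):
    """Plain recursive coordinate bisection: repeatedly split the heaviest
    part at the weighted median along its longest coordinate axis."""
    parts = [np.arange(len(points))]
    while len(parts) < n_parts:
        # pick the part carrying the most load
        heavy = max(range(len(parts)), key=lambda i: weights[parts[i]].sum())
        idx = parts.pop(heavy)
        # choose the axis with the largest spatial extent
        axis = np.argmax(points[idx].max(axis=0) - points[idx].min(axis=0))
        order = idx[np.argsort(points[idx, axis])]
        cum = np.cumsum(weights[order])
        cut = int(np.searchsorted(cum, cum[-1] / 2))  # weighted median
        parts += [order[:cut + 1], order[cut + 1:]]
    return parts

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))            # random node coordinates
w = np.ones(1000)                      # uniform load per node
parts = recursive_coordinate_bisection(pts, w, 4)
print(sorted(len(p) for p in parts))   # -> [250, 250, 250, 250]
```

    With uniform weights the splits reduce to equal-count halvings; non-uniform weights (e.g. heavier FSI elements) shift the cut so each part carries roughly equal load.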

  12. Comparing the Effects of Different Smoothing Algorithms on the Assessment of Dimensionality of Ordered Categorical Items with Parallel Analysis.

    Science.gov (United States)

    Debelak, Rudolf; Tran, Ulrich S

    2016-01-01

    Principal component analysis and exploratory factor analysis of polychoric correlations are well-known approaches to determine the dimensionality of ordered categorical items. However, the application of these approaches has been considered critical due to the possible indefiniteness of the polychoric correlation matrix. A possible solution to this problem is the application of smoothing algorithms. This study compared the effects of three smoothing algorithms, based on the Frobenius norm, on the adaptation of the eigenvalues and eigenvectors, and on minimum-trace factor analysis, on the accuracy of various variants of parallel analysis by means of a simulation study. We simulated datasets which varied with respect to the size of the respondent sample, the size of the item set, the underlying factor model, the skewness of the response distributions, and the number of response categories per item. We found that parallel analysis and principal component analysis of smoothed polychoric and Pearson correlations led to the most accurate results in detecting the number of major factors, compared to the other methods we investigated. Of the methods used for smoothing polychoric correlation matrices, we recommend the algorithm based on minimum-trace factor analysis.
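
    The core smoothing idea, making an indefinite pseudo-correlation matrix positive semi-definite, can be sketched as eigenvalue clipping followed by rescaling back to a unit diagonal. This is a minimal illustration of the eigenvalue-adjustment family of methods, not the exact algorithms benchmarked in the study.

```python
import numpy as np

def smooth_corr(R, eps=1e-6):
    """Clip negative eigenvalues of an indefinite pseudo-correlation
    matrix and rescale so the diagonal is exactly 1 again."""
    vals, vecs = np.linalg.eigh(R)
    S = vecs @ np.diag(np.clip(vals, eps, None)) @ vecs.T
    d = np.sqrt(np.diag(S))
    S = S / np.outer(d, d)          # restore unit diagonal
    return (S + S.T) / 2            # enforce exact symmetry

# Pairwise polychoric estimates can violate positive semi-definiteness:
R = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])
print(np.linalg.eigvalsh(R).min())        # negative -> indefinite
Rs = smooth_corr(R)
print(np.linalg.eigvalsh(Rs).min() >= 0)  # True after smoothing
```

    The rescaling step is a congruence transform, so positive semi-definiteness established by the clipping is preserved while the diagonal returns to 1.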

  13. A g-factor metric for k-t SENSE and k-t PCA based parallel imaging.

    Science.gov (United States)

    Binter, Christian; Ramb, Rebecca; Jung, Bernd; Kozerke, Sebastian

    2016-02-01

    To propose and validate a g-factor formalism for k-t SENSE, k-t PCA and related k-t methods for assessing SNR and temporal fidelity. An analytical gxf-factor formulation in the spatiotemporal frequency domain is derived, enabling assessment of noise and depiction fidelity in both the spatial and frequency domains. Using pseudo-replica analysis of cardiac cine data, the gxf-factor description is validated and example data are used to analyze the performance of k-t methods for various parameter settings. Analytical gxf-factor maps were found to agree well with pseudo-replica analysis for 3x, 5x, and 7x k-t SENSE and k-t PCA. While k-t SENSE resulted in lower average gxf values (gx,avg) in static regions when compared with k-t PCA, k-t PCA yielded lower gx,avg values in dynamic regions. Temporal transfer was better preserved with k-t PCA for increasing undersampling factors. The proposed gxf-factor and temporal transfer formalism allows assessing the noise performance and temporal depiction fidelity of k-t methods including k-t SENSE and k-t PCA. The framework enables quantitative comparison of different k-t methods relative to frame-by-frame parallel imaging reconstruction. © 2015 Wiley Periodicals, Inc.
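
    For reference, the frame-by-frame quantity that such k-t formalisms generalize is the classic SENSE g-factor. A minimal sketch, assuming white, uncorrelated coil noise (this is the standard SENSE expression, not the paper's gxf formulation):

```python
import numpy as np

def sense_g_factor(S):
    """Classic SENSE g-factor for a coil sensitivity matrix S
    (n_coils x n_aliased_pixels) with unit coil noise covariance:
    g_i = sqrt([(S^H S)^-1]_ii * [S^H S]_ii)."""
    SHS = S.conj().T @ S
    return np.sqrt(np.diag(np.linalg.inv(SHS)).real * np.diag(SHS).real)

# Random complex sensitivities: 8 coils, 3 aliased pixels (acceleration 3)
rng = np.random.default_rng(0)
S = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
g = sense_g_factor(S)
print(np.all(g >= 1))   # the g-factor is bounded below by 1
```

    The lower bound g >= 1 follows from the Cauchy-Schwarz inequality applied to the positive-definite matrix S^H S; values above 1 quantify the noise amplification caused by unfolding.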

  14. Digital tomosynthesis parallel imaging computational analysis with shift and add and back projection reconstruction algorithms.

    Science.gov (United States)

    Chen, Ying; Balla, Apuroop; Rayford II, Cleveland E; Zhou, Weihua; Fang, Jian; Cong, Linlin

    2010-01-01

    Digital tomosynthesis is a novel technology that has been developed for various clinical applications. The parallel imaging configuration is utilised in a few tomosynthesis imaging areas such as digital chest tomosynthesis; recently, it has also begun to appear in breast tomosynthesis. In this paper, we present a computational analysis of impulse response characterisation as the starting point of our research efforts to optimise parallel imaging configurations. Results suggest that impulse response computational analysis is an effective method for comparing and optimising imaging configurations.

  15. Analysis for Parallel Execution without Performing Hardware/Software Co-simulation

    OpenAIRE

    Muhammad Rashid

    2014-01-01

    Hardware/software co-simulation improves the performance of embedded applications by executing the applications on a virtual platform before the actual hardware is available in silicon. However, the virtual platform of the target architecture is often not available during early stages of the embedded design flow. Consequently, analysis for parallel execution without performing hardware/software co-simulation is required. This article presents an analysis methodology for parallel execution of ...

  16. Factor Analysis Using "R"

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2013-02-01

    Full Text Available R (R Development Core Team, 2011) is a very powerful tool for analyzing data that is gaining popularity due to its cost (it is free) and flexibility (it is open-source). This article gives a general introduction to using R (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson (2009), this article walks the user through how to use the program to conduct factor analysis, from both an exploratory and a confirmatory approach.

  17. Fourier analysis of parallel inexact Block-Jacobi splitting with transport synthetic acceleration in slab geometry

    International Nuclear Information System (INIS)

    Rosa, M.; Warsa, J. S.; Chang, J. H.

    2006-01-01

    A Fourier analysis is conducted for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. Both 'traditional' TSA (TTSA) and a 'modified' TSA (MTSA), in which only the scattering in the low order equations is reduced by some non-negative factor β < 1, are considered. The results for the un-accelerated algorithm show that convergence of the PBJ algorithm can degrade. The PBJ algorithm with TTSA can be effective provided the β parameter is properly tuned for a given scattering ratio c, but is potentially unstable. Compared to TTSA, MTSA is less sensitive to the choice of β, more effective for the same computational effort (c'), and it is unconditionally stable. (authors)

  18. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) Direction-of-Arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using three extended parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating 2D DOA for multiple narrowband incident sources, as follows: use of a large number of snapshots, estimation failure for elevation and azimuth angles in the range typical of mobile communication, and estimation of coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix to overcome these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices, which are based on a single or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.
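
    The PARAFAC model referenced here factorizes a three-way array into a sum of rank-one terms, X[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r]. A generic alternating-least-squares fit of that model can be sketched as follows (an illustrative NumPy implementation, not the authors' Toeplitz-based estimator):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (IJ x R)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac_als(X, rank, n_iter=200, seed=0):
    """Fit X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] by alternating least
    squares on the three matrix unfoldings of X."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    X1 = X.reshape(I, J * K)                      # mode-1 unfolding
    X2 = np.moveaxis(X, 1, 0).reshape(J, I * K)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, I * J)   # mode-3 unfolding
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Recover an exact rank-2 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
Ah, Bh, Ch = parafac_als(X, rank=2)
err = np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)) / np.linalg.norm(X)
print(err < 1e-3)
```

    Each ALS step solves a linear least-squares problem in one factor matrix while the other two are held fixed; under mild conditions the trilinear structure makes the factors essentially unique, which is what removes the pair-matching problem.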

  19. Diderot: a Domain-Specific Language for Portable Parallel Scientific Visualization and Image Analysis.

    Science.gov (United States)

    Kindlmann, Gordon; Chiw, Charisee; Seltzer, Nicholas; Samuels, Lamont; Reppy, John

    2016-01-01

    Many algorithms for scientific visualization and image analysis are rooted in the world of continuous scalar, vector, and tensor fields, but are programmed in low-level languages and libraries that obscure their mathematical foundations. Diderot is a parallel domain-specific language that is designed to bridge this semantic gap by providing the programmer with a high-level, mathematical programming notation that allows direct expression of mathematical concepts in code. Furthermore, Diderot provides parallel performance that takes advantage of modern multicore processors and GPUs. The high-level notation allows a concise and natural expression of the algorithms and the parallelism allows efficient execution on real-world datasets.

  20. Parallel Enhancements of the General Mission Analysis Tool, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The General Mission Analysis Tool (GMAT) is a state of the art spacecraft mission design tool under active development at NASA's Goddard Space Flight Center (GSFC)....

  1. Parallel Flux Tensor Analysis for Efficient Moving Object Detection

    Science.gov (United States)

    2011-07-01

    ...sensing and layered sensor fusion. Such agile sensor networks need to be further enhanced to minimize overall power consumption under the constraint of... but also higher power consumption. The speed-up of the multicore flux tensor implementation ranged from a factor of 11 to 20 for the smaller SD video...

  2. Non linear stability analysis of parallel channels with natural circulation

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Ashish Mani; Singh, Suneet, E-mail: suneet.singh@iitb.ac.in

    2016-12-01

    Highlights: • Nonlinear instabilities in a natural circulation loop are studied. • Generalized Hopf points and sub- and supercritical Hopf bifurcations are identified. • A Bogdanov–Takens point (BT point) is observed by nonlinear stability analysis. • The effect of parameters on the stability of the system is studied. - Abstract: Linear stability analysis of two-phase flow in natural circulation loops has been quite extensively studied by many researchers in the past few years. It can be noted that linear stability analysis is limited to small perturbations only. It is pointed out that such systems typically undergo Hopf bifurcation. If the Hopf bifurcation is subcritical, then for relatively large perturbations the system has unstable limit cycles in the (linearly) stable region of the parameter space. Hence, linear stability analysis capturing only infinitesimally small perturbations is not sufficient. In this paper, bifurcation analysis is carried out to capture the nonlinear instability of the dynamical system, and both subcritical and supercritical bifurcations are observed. The regions in the parameter space for which subcritical and supercritical bifurcations exist are identified. These regions are verified by numerical simulation of the time-dependent, nonlinear ODEs for selected points in the operating parameter space using the MATLAB ODE solver.

  3. Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review

    Science.gov (United States)

    Liu, Wen-Lan; Xu, Yun-Dou; Yao, Jian-Tao; Zhao, Yong-Sheng

    2017-11-01

    The force analysis of overconstrained parallel mechanisms (PMs) is relatively complex and difficult, and methods for it have long been a research hotspot. However, few papers analyze the characteristics and application scopes of the various methods, which makes it inconvenient for researchers and engineers to master and adopt them properly. A review of the methods for force analysis of both passive and active overconstrained PMs is presented. The existing force analysis methods for these two kinds of overconstrained PMs are classified according to their main ideas. Each category is briefly demonstrated and evaluated with respect to such aspects as the amount of calculation, how comprehensively limb deformation is considered, and the existence of explicit expressions for the solutions, which provides an important reference for researchers and engineers to quickly find a suitable method. The similarities and differences between the statically indeterminate problem of passive overconstrained PMs and that of active overconstrained PMs are discussed, and a universal method for these two kinds of overconstrained PMs is pointed out. The existing deficiencies and development directions of force analysis methods for overconstrained systems are indicated based on the overview.

  4. TSimpleAnalysis: histogramming many trees in parallel

    CERN Document Server

    Giommi, Luca

    2016-01-01

    I worked in the ROOT team of the EP-SFT group. My project focused on writing a ROOT class that creates histograms from a TChain. The class is named TSimpleAnalysis and is already integrated in ROOT. My work was to write the source and header files of the class, as well as a Python script that allows the user to invoke the class from the command line. This represents a great improvement with respect to the usual user code, which requires many lines of code to do the same thing. (Link for the class: https://root.cern.ch/doc/master/classTSimpleAnalysis.html)

  5. Evaluation of Apache Hadoop for parallel data analysis with ROOT

    International Nuclear Information System (INIS)

    Lehrack, S; Duckeck, G; Ebke, J

    2014-01-01

    The Apache Hadoop software is a Java based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives in distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as execution back-end. The focus in the measurements were on the one hand to safely store analysis data on HDFS with reasonable data rates and on the other hand to process data fast and reliably with MapReduce. In the evaluation of the HDFS, read/write data rates from local Hadoop cluster have been measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses have been used and event rates have been compared to PROOF.

  6. Evaluation of Apache Hadoop for parallel data analysis with ROOT

    Science.gov (United States)

    Lehrack, S.; Duckeck, G.; Ebke, J.

    2014-06-01

    The Apache Hadoop software is a Java based framework for distributed processing of large data sets across clusters of computers, using the Hadoop file system (HDFS) for data storage and backup and MapReduce as a processing platform. Hadoop is primarily designed for processing large textual data sets which can be processed in arbitrary chunks, and must be adapted to the use case of processing binary data files which cannot be split automatically. However, Hadoop offers attractive features in terms of fault tolerance, task supervision and control, multi-user functionality and job management. For this reason, we evaluated Apache Hadoop as an alternative approach to PROOF for ROOT data analysis. Two alternatives in distributing analysis data were discussed: either the data was stored in HDFS and processed with MapReduce, or the data was accessed via a standard Grid storage system (dCache Tier-2) and MapReduce was used only as execution back-end. The focus in the measurements were on the one hand to safely store analysis data on HDFS with reasonable data rates and on the other hand to process data fast and reliably with MapReduce. In the evaluation of the HDFS, read/write data rates from local Hadoop cluster have been measured and compared to standard data rates from the local NFS installation. In the evaluation of MapReduce, realistic ROOT analyses have been used and event rates have been compared to PROOF.

  7. Analysis of Droop Controlled Parallel Inverters in Islanded Microgrids

    DEFF Research Database (Denmark)

    Mariani, Valerio; Vasca, Francesco; Guerrero, Josep M.

    2014-01-01

    Three-phase droop controlled inverters are widely used in islanded microgrids to interface distributed energy resources and to supply the loads' active and reactive power demand. The assessment of microgrid stability, affected by the control and line parameters, is a stringent issue. This paper shows a systematic approach to derive a closed loop model of the microgrid and then to perform an eigenvalue analysis that highlights how the system's parameters affect the stability of the network. It is also shown that by means of a singular perturbation approach the resulting reduced order...

  8. Climate change and daily press : Italy vs Usa parallel analysis

    International Nuclear Information System (INIS)

    Borrelli, G.; Mazzotta, V.; Falconi, C.; Grossi, R.; Farabollini, F.

    1996-06-01

    Among the activities of ENEA (Italian National Agency for New Technologies, Energy, and the Environment), one deals with the analysis and strategies of environmental information. A survey of the coverage by four daily newspapers of an issue belonging to this area, global climate change, has been carried out. The newspapers involved are two Italian ones, 'La Repubblica' and 'Il Corriere della Sera', and two North American ones, the 'New York Times' and the 'Washington Post'. The purpose of the work was to detect the qualitative and quantitative level of awareness of the Italian press via a comparison with the North American press, which is notoriously sensitive and attentive to environmental issues. The articles analyzed break down as follows: 319 for the 'New York Times', 309 for the 'Washington Post', 146 for the 'Corriere della Sera', and 81 for 'La Repubblica'. The time period covered by the analysis spans from 1989, the year in which organization of the 1992 Rio Conference began, to December 1994, the deadline for the submission of national

  9. Scientific data analysis on data-parallel platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, Craig D.; Bayer, Gregory W.; Choe, Yung Ryn; Roe, Diana C.

    2010-09-01

    As scientific computing users migrate to petaflop platforms that promise to generate multi-terabyte datasets, there is a growing need in the community to be able to embed sophisticated analysis algorithms in the computing platforms' storage systems. Data Warehouse Appliances (DWAs) are attractive for this work, due to their ability to store and process massive datasets efficiently. While DWAs have been utilized effectively in data-mining and informatics applications, they remain largely unproven in scientific workloads. In this paper we present our experiences in adapting two mesh analysis algorithms to function on five different DWA architectures: two Netezza database appliances, an XtremeData dbX database, a LexisNexis DAS, and multiple Hadoop MapReduce clusters. The main contribution of this work is insight into the differences between these DWAs from a user's perspective. In addition, we present performance measurements for ten DWA systems to help understand the impact of different architectural trade-offs in these systems.

  10. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular, the DELPHI collaboration). The suggested modular supercomputer is based on 32-bit commercially available microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II computer periphery. Users' software is based on FORTRAN-77. The supercomputer is connected with a JINR network port, and all JINR users get access to the suggested system

  11. Functional efficiency comparison between split- and parallel-hybrid using advanced energy flow analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Guttenberg, Philipp; Lin, Mengyan [Romax Technology, Nottingham (United Kingdom)

    2009-07-01

    The following paper presents a comparative efficiency analysis of the Toyota Prius versus the Honda Insight using advanced Energy Flow Analysis methods. The sample study shows that even very different hybrid concepts such as a split- and a parallel-hybrid can be compared at a high level of detail, and it demonstrates the benefit with exemplary results. (orig.)

  12. Modeling and Grid impedance Variation Analysis of Parallel Connected Grid Connected Inverter based on Impedance Based Harmonic Analysis

    DEFF Research Database (Denmark)

    Kwon, JunBum; Wang, Xiongfei; Bak, Claus Leth

    2014-01-01

    This paper addresses the harmonic compensation error problem that arises with parallel connected inverters under the same grid interface conditions by means of impedance-based analysis and modeling. Unlike the single grid connected inverter, it is found that multiple parallel connected inverters and the grid impedance can influence each other if each of them has a harmonic compensation function. The analysis method proposed in this paper is based on the relationship between the overall output impedance and the input impedance of the parallel connected inverters, where a controller gain design method, which can...

  13. Parallel Index and Query for Large Scale Data Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.

  14. Parallelization and scheduling of data intensive particle physics analysis jobs on clusters of PCs

    CERN Document Server

    Ponce, S

    2004-01-01

    Summary form only given. Scheduling policies are proposed for parallelizing data intensive particle physics analysis applications on computer clusters. Particle physics analysis jobs require the analysis of tens of thousands of particle collision events, each event typically requiring 200 ms of processing time and 600 KB of data. Many jobs are launched concurrently by a large number of physicists. At first view, particle physics jobs seem easy to parallelize, since particle collision events can be processed independently of one another. However, since large amounts of data need to be accessed, the real challenge resides in making efficient use of the underlying computing resources. We propose several job parallelization and scheduling policies aimed at reducing job processing times and at increasing the sustainable load of a cluster server. Since particle collision events are usually reused by several jobs, cache-based job splitting strategies considerably increase cluster utilization and reduce job ...

  15. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors with 57 items affecting construction performance. The findings reveal ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important to understand a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can effectively plan and implement a performance development plan that matches the mission and vision of the company.
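
    The extraction step of an exploratory factor analysis can be illustrated with a minimal principal-axis factoring routine. This is a generic NumPy sketch of the technique, not the software or settings used in the study, and the demo correlation matrix is a hypothetical two-factor example.

```python
import numpy as np

def principal_axis_factoring(R, n_factors, n_iter=100):
    """Iteratively re-estimate communalities on the diagonal of the
    correlation matrix and extract loadings from the leading eigenpairs."""
    Rh = R.copy()
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # initial communalities (SMC)
    for _ in range(n_iter):
        np.fill_diagonal(Rh, h2)
        vals, vecs = np.linalg.eigh(Rh)
        vals = np.clip(vals[::-1][:n_factors], 0, None)  # largest first
        L = vecs[:, ::-1][:, :n_factors] * np.sqrt(vals)
        h2 = (L ** 2).sum(axis=1)            # updated communalities
    return L

# Correlation matrix generated by an exact two-factor model
Ltrue = np.zeros((6, 2))
Ltrue[:3, 0] = Ltrue[3:, 1] = 0.8
R = Ltrue @ Ltrue.T + np.diag(1 - (Ltrue ** 2).sum(axis=1))
L = principal_axis_factoring(R, n_factors=2)
print(np.allclose(L @ L.T, Ltrue @ Ltrue.T, atol=1e-3))
```

    The recovered loadings are unique only up to rotation, so the comparison is made on the rotation-invariant quantity L·Lᵀ; in practice a rotation such as varimax would follow extraction before interpreting the factors.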

  16. Analysis of jacobian and singularity of planar parallel robots using screw theory

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jung Hyun; Lee, Jeh Won; Lee, Hyuk Jin [Yeungnam Univ., Gyeongsan (Korea, Republic of)]

    2012-11-15

    The Jacobian and singularity analysis of parallel robots is necessary for analyzing robot motion. The derivations of the Jacobian matrix and singularity configurations are complicated, and the velocity form of the Jacobian matrix has no geometrical meaning. In this study, screw theory is used to derive the Jacobian of parallel robots. The statics form of the Jacobian has a geometrical meaning, and singularity analysis can be performed using these geometrical values. Furthermore, this study shows that screw theory is applicable to redundantly actuated robots as well as non-redundant robots.
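
    The screw-based Jacobian construction can be illustrated on a simple serial case; the sketch below uses a hypothetical planar 2R arm (not one of the paper's parallel robots), building each Jacobian column from the joint's motion screw and checking the result against the textbook planar Jacobian:

```python
import numpy as np

def revolute_screw(axis, point):
    """Motion screw (twist) of a revolute joint: [omega; point x omega]."""
    axis = np.asarray(axis, float)
    point = np.asarray(point, float)
    return np.concatenate([axis, np.cross(point, axis)])

# hypothetical planar 2R arm
l1, l2 = 1.0, 0.7
t1, t2 = 0.3, 0.5
q1 = np.zeros(3)                                     # joint-1 location
q2 = np.array([l1*np.cos(t1), l1*np.sin(t1), 0.0])   # joint-2 location
z = np.array([0.0, 0.0, 1.0])                        # both joint axes

# space-frame Jacobian: one screw column per joint
J = np.column_stack([revolute_screw(z, q1), revolute_screw(z, q2)])

dq = np.array([0.2, -0.1])                           # joint rates
omega, v_origin = np.split(J @ dq, 2)                # spatial twist

# linear velocity of the end-effector point p
p = q2 + np.array([l2*np.cos(t1+t2), l2*np.sin(t1+t2), 0.0])
v_ee = v_origin + np.cross(omega, p)
```

    Singularity analysis then amounts to examining when the screw columns become linearly dependent.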

  17. Screw-System-Based Mobility Analysis of a Family of Fully Translational Parallel Manipulators

    Directory of Open Access Journals (Sweden)

    Ernesto Rodriguez-Leal

    2013-01-01

    Full Text Available This paper investigates the mobility of a family of fully translational parallel manipulators based on screw system analysis, identifying the common constraint and redundant constraints and providing a case study of this approach. The paper presents the branch motion-screws for the 3-RP̲C-Y parallel manipulator, the 3-RCC-Y (or 3-RP̲RC-Y) parallel manipulator, and a newly proposed 3-RP̲C-T parallel manipulator. It then determines the sets of platform constraint-screws for each of these three manipulators. The constraints exerted on the platforms of the 3-RP̲C architectures and the 3-RCC-Y manipulator are analyzed using the screw system approach and have been identified as couples. A similarity has been identified in the axes of the couples: they are perpendicular to the R joint axes, but in the former the axes are coplanar with the base, while in the latter they are perpendicular to the limb. The remaining couples act about the axis normal to the base. The motion-screw and constraint-screw system analysis leads to an insightful understanding of the mobility of the platform, which is then obtained by determining the screws reciprocal to the platform constraint-screw sets, resulting in three independent instantaneous translational degrees of freedom. To validate the mobility analysis of the three parallel manipulators, the paper includes motion simulations using commercially available kinematics software.

  18. Study on Parallel Processing for Efficient Flexible Multibody Analysis based on Subsystem Synthesis Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jong-Boo; Song, Hajun; Kim, Sung-Soo [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-06-15

    Flexible multibody simulations are widely used in industry to design mechanical systems. In flexible multibody dynamics, deformation coordinates are described either relatively, in a body reference frame floating in space, or in the inertial reference frame, and these coordinates are generated by discretizing the body according to the finite element approach. The formulation of a flexible multibody system therefore involves a huge number of degrees of freedom, and the numerical solution methods require a substantial amount of computational time. Parallel computation is one route to efficiency; however, most parallel computational methods focus on the efficient solution of large linear systems, whereas multibody analysis needs a formulation that is itself suitable for parallel computation. In this paper, we developed a subsystem synthesis method for a flexible multibody system and proposed parallel computational schemes based on the OpenMP API. Simulations of a rotating blade system consisting of three identical blades were carried out with two different parallel computational schemes, and actual CPU times were measured to investigate the efficiency of the proposed schemes.

  19. Analysis on detection accuracy of binocular photoelectric instrument optical axis parallelism digital calibration instrument

    Science.gov (United States)

    Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan

    2017-11-01

    Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Misalignment of the ocular axes of a binocular instrument causes symptoms such as dizziness and nausea in the observer during prolonged use. A digital calibration instrument for binocular photoelectric equipment was developed to detect ocular axis parallelism, so that the optical axis deviation can be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instruments under test. This paper analyzes the factors that influence detection accuracy. Such factors exist at each link of the testing process and can be divided into two categories: those that directly affect the position of the reticle image, and those that affect the calculation of the center of the reticle image. The synthesized error is then calculated, and the individual errors are distributed reasonably to ensure the accuracy of the calibration instrument.
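
    Synthesizing a total error from independent sources is commonly done as a root-sum-square combination; the figures below are made-up placeholders, not the instrument's actual error budget:

```python
import math

# hypothetical independent error sources (mrad)
position_errors = [0.05, 0.12, 0.08]   # affect where the reticle image falls
centroid_errors = [0.04, 0.03]         # affect locating the image centre

# independent random errors combine as the root sum of squares
total = math.sqrt(sum(e * e for e in position_errors + centroid_errors))
print(round(total, 4))  # → 0.1606
```

    The budget can then be worked backwards: a target total accuracy is distributed across the links so that the root sum of squares stays within the requirement.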

  20. State-space-based harmonic stability analysis for paralleled grid-connected inverters

    DEFF Research Database (Denmark)

    Wang, Yanbo; Wang, Xiongfei; Chen, Zhe

    2016-01-01

    This paper addresses a state-space-based harmonic stability analysis of a paralleled grid-connected inverter system. A small-signal model of an individual inverter is developed, in which the LCL filter, the equivalent delay of the control system, and the current controller are modeled. Then, the overall small-signal model of the paralleled grid-connected inverters is built. Finally, the state-space-based stability analysis approach is developed to explain the harmonic resonance phenomenon. The eigenvalue traces associated with time delay and coupled grid impedance are obtained, which account for how an unstable inverter produces harmonic resonance and leads to the instability of the whole paralleled system. The proposed approach reveals the contributions of the grid impedance as well as the coupled effect on other grid-connected inverters under different grid conditions. Simulation and experimental results ...
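
    The core of a state-space stability check is inspecting where the eigenvalues of the system matrix lie; a toy sketch follows, in which the 2×2 matrices are illustrative stand-ins rather than the paper's inverter model:

```python
import numpy as np

# hypothetical closed-loop state matrices (illustrative 2x2 systems,
# not the paper's full LCL-filter model)
A_stable = np.array([[0.0, 1.0],
                     [-2.0, -3.0]])
# increasing an equivalent delay/impedance term can push eigenvalues rightward
A_unstable = np.array([[0.0, 1.0],
                       [2.0, -3.0]])

def is_stable(A):
    """Continuous-time stability: all eigenvalues in the open left half-plane."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

print(is_stable(A_stable), is_stable(A_unstable))  # → True False
```

    Sweeping a parameter (e.g. delay or grid impedance) and recording the eigenvalues yields the eigenvalue traces used to locate the stability boundary.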

  1. Dynamic and Control Analysis of Modular Multi-Parallel Rectifiers (MMR)

    DEFF Research Database (Denmark)

    Zare, Firuz; Ghosh, Arindam; Davari, Pooya

    2017-01-01

    This paper presents a dynamic analysis of a Modular Multi-Parallel Rectifier (MMR) based on state-space modelling and analysis. The proposed topology is suitable for high-power applications and can reduce line-current harmonic emissions significantly. However, a proper controller is required to share and control the current through each rectifier. Mathematical analysis and preliminary simulations have been carried out to verify the proposed controller under different operating conditions.

  2. Turbo-SMT: Parallel Coupled Sparse Matrix-Tensor Factorizations and Applications

    Science.gov (United States)

    Papalexakis, Evangelos E.; Faloutsos, Christos; Mitchell, Tom M.; Talukdar, Partha Pratim; Sidiropoulos, Nicholas D.; Murphy, Brian

    2016-01-01

    How can we correlate the neural activity in the human brain as it responds to typed words with properties of these terms (like ’edible’, ’fits in hand’)? In short, we want to find latent variables that jointly explain both the brain activity and the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem. Can we enhance any CMTF solver, so that it can operate on potentially very large datasets that may not fit in main memory? We introduce Turbo-SMT, a meta-method capable of doing exactly that: it boosts the performance of any CMTF algorithm, parallelizes it (with speedups of up to 65-fold), and produces sparse and interpretable solutions. Additionally, we improve upon ALS, the work-horse algorithm for CMTF, with respect to efficiency and robustness to missing values. We apply Turbo-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Turbo-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Turbo-SMT by applying it to a Facebook dataset (users, ’friends’, wall-postings); there, Turbo-SMT spots spammer-like anomalies. PMID:27672406
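
    A rank-1 coupled matrix-tensor factorization can be sketched with plain alternating least squares; this toy version shares one factor between a tensor and a matrix along the coupled mode (for rank 1 the Khatri-Rao product reduces to a Kronecker product). Turbo-SMT itself adds sampling, sparsity, and parallelism on top of such a solver:

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, K, P = 5, 4, 3, 6
# ground-truth rank-1 factors; tensor X and matrix Y share factor a
a_t, b_t, c_t, g_t = (rng.standard_normal(s) for s in (I, J, K, P))
X = np.einsum('i,j,k->ijk', a_t, b_t, c_t)   # (I, J, K) tensor
Y = np.outer(a_t, g_t)                        # (I, P) matrix, coupled on mode 0

# random initialization
a, b, c, g = (rng.standard_normal(s) for s in (I, J, K, P))

for _ in range(50):
    kr = np.kron(b, c)   # rank-1 Khatri-Rao product
    # coupled least-squares update of the shared factor (fits X and Y jointly)
    a = (X.reshape(I, -1) @ kr + Y @ g) / (kr @ kr + g @ g)
    # ordinary ALS updates from the mode-2 and mode-3 unfoldings
    b = np.transpose(X, (1, 0, 2)).reshape(J, -1) @ np.kron(a, c) / ((a @ a) * (c @ c))
    c = np.transpose(X, (2, 0, 1)).reshape(K, -1) @ np.kron(a, b) / ((a @ a) * (b @ b))
    g = Y.T @ a / (a @ a)

X_hat = np.einsum('i,j,k->ijk', a, b, c)
Y_hat = np.outer(a, g)
```

    On exact rank-1 data this alternating scheme recovers both the tensor and the coupled matrix up to the usual scaling indeterminacy.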

  3. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    National Research Council Canada - National Science Library

    Steed, Chad A; Fitzpatrick, Patrick J; Jankun-Kelly, T. J; Swan II, J. E

    2008-01-01

    ... for a particular dependent variable. These capabilities are combined into a unique visualization system that is demonstrated via a North Atlantic hurricane climate study using a systematic workflow. This research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.

  4. Stiffness analysis and comparison of a Biglide parallel grinder with alternative spatial modular parallelograms

    DEFF Research Database (Denmark)

    Wu, Guanglei; Zou, Ping

    2017-01-01

    This paper deals with the stiffness modeling, analysis and comparison of a Biglide parallel grinder with two alternative modular parallelograms. It turns out that the Cartesian stiffness matrix of the manipulator has the property that it can be decoupled into two homogeneous matrices, correspondi...

  5. Operation States Analysis of the Series-Parallel resonant Converter Working Above Resonance Frequency

    Directory of Open Access Journals (Sweden)

    Peter Dzurko

    2007-01-01

    Full Text Available An operation states analysis of a series-parallel converter working above resonance frequency is described in the paper. Principal equations are derived for the individual operation states, and on their basis diagrams are constructed. The diagrams give a complete image of the converter behaviour for individual circuit parameters. The waveforms may be utilised in designing the individual parts of the inverter.

  6. Alleviating Search Uncertainty through Concept Associations: Automatic Indexing, Co-Occurrence Analysis, and Parallel Computing.

    Science.gov (United States)

    Chen, Hsinchun; Martinez, Joanne; Kirchhoff, Amy; Ng, Tobun D.; Schatz, Bruce R.

    1998-01-01

    Grounded in object filtering, automatic indexing, and co-occurrence analysis, an experiment was performed using a parallel supercomputer to analyze over 400,000 abstracts in an INSPEC computer engineering collection. A user evaluation revealed that the system-generated thesauri were better than the human-generated INSPEC subject thesaurus in concept…
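
    Co-occurrence analysis of the kind used to build system-generated thesauri can be sketched by counting term pairs per document; the toy corpus below is invented, and a real system would add tokenization, filtering, and weighting:

```python
from collections import Counter
from itertools import combinations

# toy "abstracts" (the experiment used >400,000 INSPEC abstracts)
docs = [
    "parallel computing cluster scheduling",
    "automatic indexing information retrieval",
    "parallel computing indexing",
]

# count how often each pair of terms appears in the same document
cooc = Counter()
for doc in docs:
    terms = sorted(set(doc.split()))          # unique terms, canonical order
    cooc.update(combinations(terms, 2))       # all unordered term pairs

print(cooc[("computing", "parallel")])  # → 2
```

    Pairs with high co-occurrence counts (suitably normalized) become candidate related-term entries in an automatically generated thesaurus.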

  7. Operation Analysis of the Series-Parallel Resonant Converter Working above Resonance Frequency

    Directory of Open Access Journals (Sweden)

    Peter Dzurko

    2006-01-01

    Full Text Available The present article deals with a theoretical analysis of the operation of a series-parallel converter working above resonance frequency. Principal equations are derived for the individual operation intervals, and based on these, waveforms of the individual quantities are constructed for both inverter operation at load and no-load operation. The waveforms may be utilised in designing the individual parts of the inverter.

  8. Analysis and Modeling of Circulating Current in Two Parallel-Connected Inverters

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Gohil, Ghanshyamsinh Vijaysinh; Bede, Lorand

    2015-01-01

    Parallel-connected inverters are gaining attention for high power applications because of the limited power handling capability of the power modules. Moreover, the parallel-connected inverters may have low total harmonic distortion of the ac current if they are operated with the interleaved pulse-width modulation (PWM). However, the interleaved PWM causes a circulating current between the inverters, which in turn causes additional losses. A model describing the dynamics of the circulating current is presented in this study which shows that the circulating current depends on the common-mode voltage. Using this model, the circulating current between two parallel-connected inverters is analysed. The peak and root mean square (rms) values of the normalised circulating current are calculated for different PWM methods, which makes this analysis a valuable tool to design a filter for the circulating current.

  9. Analysis of gamma irradiator dose rate using spent fuel elements with parallel configuration

    International Nuclear Information System (INIS)

    Setiyanto; Pudjijanto MS; Ardani

    2006-01-01

    To enhance the utilization of RSG-GAS reactor spent fuel, a gamma irradiator using spent fuel elements as the gamma source is a suitable choice. Such an irradiator can be used for food sterilization and preservation. As a first step before realization, it is necessary to determine the gamma dose rate theoretically. The assessment was carried out for a parallel configuration of fuel elements, in which the irradiation space can be placed between rows of fuel elements. The parallel model was chosen for comparison with the circular model, and because it offers more space for irradiation and easier manipulation of the irradiation target. Dose rate calculations were done with MCNP, while the gamma activities of the fuel elements were estimated with the OREGEN code assuming an average delay time of 1 year. The calculation results show that the gamma dose rate of the parallel model is up to 50% lower than that of the circular model, but the value is still sufficient for sterilization and preservation. For food preservation in particular, the parallel model is more flexible, since the gamma dose rate can be adjusted to the irradiation needs. The conclusion of this assessment is that a gamma irradiator using reactor spent fuel in the parallel model offers more advantages than the circular model. (author)

  10. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix is presented in Appendix B. This computer program uses...
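
    The R-mode computation via the singular value decomposition can be sketched in a few lines; this is a generic illustration on random data (the FORTRAN program in Appendix B is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 4))  # 100 samples, 4 variables

# standardize columns (R-mode works on the correlation matrix)
Z = (X - X.mean(0)) / X.std(0, ddof=1)

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
n = Z.shape[0]
loadings = Vt.T * (s / np.sqrt(n - 1))   # R-mode factor loadings
scores = U * np.sqrt(n - 1)              # standardized factor scores

# the loadings reproduce the correlation matrix: R = L L^T
R = np.corrcoef(Z, rowvar=False)
```

    The same decomposition serves Q-mode analysis by interpreting the left singular vectors (rows/samples) instead of the right ones (variables).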

  11. Parallel Hybrid Gas-Electric Geared Turbofan Engine Conceptual Design and Benefits Analysis

    Science.gov (United States)

    Lents, Charles; Hardin, Larry; Rheaume, Jonathan; Kohlman, Lee

    2016-01-01

    The conceptual design of a parallel gas-electric hybrid propulsion system for a conventional single aisle twin engine tube and wing vehicle has been developed. The study baseline vehicle and engine technology are discussed, followed by results of the hybrid propulsion system sizing and performance analysis. The weights analysis for the electric energy storage & conversion system and thermal management system is described. Finally, the potential system benefits are assessed.

  12. Ethnicity Modifies Associations between Cardiovascular Risk Factors and Disease Severity in Parallel Dutch and Singapore Coronary Cohorts.

    Directory of Open Access Journals (Sweden)

    Crystel M Gijsberts

    Full Text Available In 2020 the largest number of patients with coronary artery disease (CAD) will be found in Asia. Published epidemiological and clinical reports are overwhelmingly derived from western (White) cohorts and data from Asia are scant. We compared CAD severity and all-cause mortality among 4 of the world's most populous ethnicities: Whites, Chinese, Indians and Malays. The UNIted CORoNary cohort (UNICORN) simultaneously enrolled parallel populations of consecutive patients undergoing coronary angiography or intervention for suspected CAD in the Netherlands and Singapore. Using multivariable ordinal regression, we investigated the independent association of ethnicity with CAD severity and interactions between risk factors and ethnicity on CAD severity. Also, we compared all-cause mortality among the ethnic groups using multivariable Cox regression analysis. We included 1,759 White, 685 Chinese, 201 Indian and 224 Malay patients undergoing coronary angiography. We found distinct inter-ethnic differences in cardiovascular risk factors. Furthermore, the associations of gender and diabetes with severity of CAD were significantly stronger in Chinese than Whites. Chinese (OR 1.3 [1.1-1.7], p = 0.008) and Malay (OR 1.9 [1.4-2.6], p<0.001) ethnicity were independently associated with more severe CAD as compared to White ethnicity. Strikingly, when stratified for diabetes status, we found a significant association of all three Asian ethnic groups as compared to White ethnicity with more severe CAD among diabetics, but not in non-diabetics. Crude all-cause mortality did not differ, but when adjusted for covariates mortality was higher in Malays than in the other ethnic groups. In this population of individuals undergoing coronary angiography, ethnicity is independently associated with the severity of CAD and modifies the strength of association between certain risk factors and CAD severity. Furthermore, mortality differs among ethnic groups. Our data provide insight in

  13. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    International Nuclear Information System (INIS)

    Heo, Jaeseok; Kim, Kyung Doo

    2015-01-01

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper
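
    The uncertainty-propagation part of such a framework amounts to evaluating a simulation model over sampled parameters in parallel and summarizing the outputs; below is a minimal sketch with a stand-in model and assumed parameter distributions, not PAPIRUS itself:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def model(theta):
    """Stand-in for an expensive simulation code: y = a * x^2 + b at x = 2."""
    a, b = theta
    return a * 4.0 + b

rng = np.random.default_rng(3)
# hypothetical parameter distributions: a ~ N(1, 0.1), b ~ N(0, 0.2)
samples = np.column_stack([rng.normal(1.0, 0.1, 1000),
                           rng.normal(0.0, 0.2, 1000)])

# evaluate the model on all samples in parallel
# (worker threads stand in for the framework's server/client processors)
with ThreadPoolExecutor(max_workers=4) as pool:
    y = np.fromiter(pool.map(model, samples), dtype=float)

mean, std = y.mean(), y.std(ddof=1)
```

    The resulting output distribution (here summarized by its mean and standard deviation) quantifies how parameter uncertainty propagates through the model; sensitivity analysis reuses the same samples.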

  14. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    Science.gov (United States)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.
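
    Condensing a child partition into a boundary super-element is, algebraically, a Schur complement; a small dense sketch with a made-up symmetric positive-definite stiffness matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical SPD "stiffness" matrix for 6 DOFs: 2 boundary + 4 internal
M = rng.standard_normal((6, 6))
K = M @ M.T + 6 * np.eye(6)
f = rng.standard_normal(6)

b, i = slice(0, 2), slice(2, 6)   # boundary / internal partitions
Kbb, Kbi, Kib, Kii = K[b, b], K[b, i], K[i, b], K[i, i]

# condense the internal DOFs into a boundary "super-element"
S = Kbb - Kbi @ np.linalg.solve(Kii, Kib)       # Schur complement
g = f[b] - Kbi @ np.linalg.solve(Kii, f[i])     # condensed load

u_b = np.linalg.solve(S, g)                     # boundary solution
u_i = np.linalg.solve(Kii, f[i] - Kib @ u_b)    # recover internal DOFs

u_full = np.linalg.solve(K, f)                  # reference: monolithic solve
```

    In the parallel setting each child partition computes its own condensed pair (S, g) independently, which is what makes the approach attractive on distributed memory systems.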

  15. PAPIRUS, a parallel computing framework for sensitivity analysis, uncertainty propagation, and estimation of parameter distribution

    Energy Technology Data Exchange (ETDEWEB)

    Heo, Jaeseok, E-mail: jheo@kaeri.re.kr; Kim, Kyung Doo, E-mail: kdkim@kaeri.re.kr

    2015-10-15

    Highlights: • We developed an interface between an engineering simulation code and statistical analysis software. • Multiple packages of the sensitivity analysis, uncertainty quantification, and parameter estimation algorithms are implemented in the framework. • Parallel computing algorithms are also implemented in the framework to solve multiple computational problems simultaneously. - Abstract: This paper introduces a statistical data analysis toolkit, PAPIRUS, designed to perform the model calibration, uncertainty propagation, Chi-square linearity test, and sensitivity analysis for both linear and nonlinear problems. The PAPIRUS was developed by implementing multiple packages of methodologies, and building an interface between an engineering simulation code and the statistical analysis algorithms. A parallel computing framework is implemented in the PAPIRUS with multiple computing resources and proper communications between the server and the clients of each processor. It was shown that even though a large amount of data is considered for the engineering calculation, the distributions of the model parameters and the calculation results can be quantified accurately with significant reductions in computational effort. A general description about the PAPIRUS with a graphical user interface is presented in Section 2. Sections 2.1–2.5 present the methodologies of data assimilation, uncertainty propagation, Chi-square linearity test, and sensitivity analysis implemented in the toolkit with some results obtained by each module of the software. Parallel computing algorithms adopted in the framework to solve multiple computational problems simultaneously are also summarized in the paper.

  16. Parallel analysis tools and new visualization techniques for ultra-large climate data set

    Energy Technology Data Exchange (ETDEWEB)

    Middleton, Don [National Center for Atmospheric Research, Boulder, CO (United States)]; Haley, Mary [National Center for Atmospheric Research, Boulder, CO (United States)]

    2014-12-10

    ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

  17. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume aims at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  18. Asterias: A Parallelized Web-based Suite for the Analysis of Expression and aCGH Data

    Directory of Open Access Journals (Sweden)

    Ramón Díaz-Uriarte

    2007-01-01

    Full Text Available The analysis of expression and CGH arrays plays a central role in the study of complex diseases, especially cancer, including finding markers for early diagnosis and prognosis, choosing an optimal therapy, or increasing our understanding of cancer development and metastasis. Asterias (http://www.asterias.info) is an integrated collection of freely-accessible web tools for the analysis of gene expression and aCGH data. Most of the tools use parallel computing (via MPI) and run on a server with 60 CPUs for computation; compared to a desktop or server-based but not parallelized application, parallelization provides speed-ups of up to a factor of 50. Most of our applications allow the user to obtain additional information for user-selected genes (chromosomal location, PubMed ids, Gene Ontology terms, etc.) by using clickable links in tables and/or figures. Our tools include: normalization of expression and aCGH data (DNMAD); converting between different types of gene/clone and protein identifiers (IDconverter/IDClight); filtering and imputation (preP); finding differentially expressed genes related to patient class and survival data (Pomelo II); searching for models of class prediction (Tnasas); using random forests to search for minimal models for class prediction or for large subsets of genes with predictive capacity (GeneSrF); searching for molecular signatures and predictive genes with survival data (SignS); detecting regions of genomic DNA gain or loss (ADaCGH). The capability to send results between different applications, access to additional functional information, and parallelized computation make our suite unique and exploit features only available to web-based applications.

  19. Stability of tapered and parallel-walled dental implants: A systematic review and meta-analysis.

    Science.gov (United States)

    Atieh, Momen A; Alsabeeha, Nabeel; Duncan, Warwick J

    2018-05-15

    Clinical trials have suggested that dental implants with a tapered configuration have improved stability at placement, allowing immediate placement and/or loading. The aim of this systematic review and meta-analysis was to evaluate the implant stability of tapered dental implants compared to standard parallel-walled dental implants. Applying the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement, randomized controlled trials (RCTs) were searched for in electronic databases, complemented by hand searching. The risk of bias was assessed using the Cochrane Collaboration's Risk of Bias tool, and data were analyzed using statistical software. A total of 1199 studies were identified, of which five trials were included, with 336 dental implants in 303 participants. Overall meta-analysis showed that tapered dental implants had higher implant stability values than parallel-walled dental implants at insertion and at 8 weeks, but the difference was not statistically significant. Tapered dental implants had significantly less marginal bone loss compared to parallel-walled dental implants. No significant differences in implant failure rate were found between tapered and parallel-walled dental implants. There is limited evidence to demonstrate the effectiveness of tapered dental implants in achieving greater implant stability compared to parallel-walled dental implants. Superior short-term results in maintaining peri-implant marginal bone with tapered dental implants are possible. Further properly designed RCTs are required to endorse the supposed advantages of tapered dental implants in immediate loading protocols and other complex clinical scenarios. © 2018 Wiley Periodicals, Inc.
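
    The pooling step of such a meta-analysis can be sketched with fixed-effect inverse-variance weighting; the per-trial effect sizes below are invented placeholders, not the review's data, and a full analysis would also consider a random-effects model and heterogeneity statistics:

```python
import math

# hypothetical per-trial mean differences in implant stability and their variances
effects = [2.1, 1.4, 3.0, 0.8, 1.9]
variances = [0.9, 1.2, 2.0, 0.7, 1.1]

# fixed-effect inverse-variance pooling: weight each trial by 1/variance
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

    The pooled estimate and its confidence interval are what a forest plot summarizes at the diamond.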

  20. Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis

    Science.gov (United States)

    Chiou, Jin-Chern

    1990-01-01

    Computational procedures for the kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed that uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained with an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out: the DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers and used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
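
    The Schur-complement systems in such a scheme are solved with a preconditioned conjugate gradient method; below is a minimal Jacobi-preconditioned sketch on a random symmetric positive-definite system, not the thesis's actual parallel solver:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=200):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    z = M_inv_diag * r             # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(5)
n = 50
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # SPD system, stand-in for the Schur-complement equations
b = rng.standard_normal(n)

x = pcg(A, b, 1.0 / np.diag(A))
```

    In a parallel setting the matrix-vector product `A @ p` is the operation distributed across processors, which is why CG-type solvers pair naturally with the arrowhead/Schur-complement structure.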

  1. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    Energy Technology Data Exchange (ETDEWEB)

    Toshimitsu, Fujisawa [Tokyo Univ., Collaborative Research Center of Frontier Simulation Software for Industrial Science, Institute of Industrial Science (Japan); Genki, Yagawa [Tokyo Univ., Department of Quantum Engineering and Systems Science (Japan)

    2003-07-01

    In this paper, an FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. A local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of the finite element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates well-suited points for the nodes of the finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for this kind of analysis, can be performed effectively on parallel processors by using the proposed method. (authors)

  2. Node-based finite element method for large-scale adaptive fluid analysis in parallel environments

    International Nuclear Information System (INIS)

    Toshimitsu, Fujisawa; Genki, Yagawa

    2003-01-01

    In this paper, an FEM-based (finite element method) mesh-free method with a probabilistic node generation technique is presented. In the proposed method, all computational procedures, from mesh generation to the solution of a system of equations, can be performed fluently in parallel in terms of nodes. A local finite element mesh is generated robustly around each node, even for harsh boundary shapes such as cracks. The algorithm and the data structure of the finite element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. In addition, the node-based finite element method is accompanied by a probabilistic node generation technique, which generates well-suited points for the nodes of the finite element mesh. Furthermore, the probabilistic node generation technique can be performed in parallel environments. As a numerical example of the proposed method, we perform a compressible flow simulation containing strong shocks. Numerical simulations with frequent mesh refinement, which are required for this kind of analysis, can be performed effectively on parallel processors by using the proposed method. (authors)

  3. A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities

    International Nuclear Information System (INIS)

    2015-01-01

    ACE3P is a 3D massively parallel simulation suite developed at SLAC National Accelerator Laboratory that can perform coupled electromagnetic, thermal and mechanical studies. Effectively utilizing supercomputer resources, ACE3P has become a key simulation tool for particle accelerator R and D. A new frequency-domain solver to perform mechanical harmonic response analysis of accelerator components has been developed within the existing parallel framework. This solver is designed to determine the frequency response of the mechanical system to external harmonic excitations for time-efficient, accurate analysis of large-scale problems. Coupled with the ACE3P electromagnetic modules, this capability complements a set of multi-physics tools for a comprehensive study of microphonics in superconducting accelerating cavities in order to understand the RF response and feedback requirements for the operational reliability of a particle accelerator. (auth)

  4. Parallel computation of aerodynamic influence coefficients for aeroelastic analysis on a transputer network

    Science.gov (United States)

    Janetzke, D. C.; Murthy, D. V.

    1991-01-01

    Aeroelastic analysis is multi-disciplinary and computationally expensive. Hence, it can greatly benefit from parallel processing. As part of an effort to develop an aeroelastic analysis capability on a distributed-memory transputer network, a parallel algorithm for the computation of aerodynamic influence coefficients is implemented on a network of 32 transputers. The aerodynamic influence coefficients are calculated using a three-dimensional unsteady aerodynamic model and a panel discretization. Efficiencies up to 85 percent are demonstrated using 32 processors. The effects of subtask ordering, problem size and network topology are presented. A comparison to results on a shared-memory computer indicates that higher speedup is achieved on the distributed-memory system.
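
The reported 85 percent efficiency on 32 processors can be put in context with a simple fixed-size performance model. The sketch below applies Amdahl's law with an assumed serial fraction; the fraction is illustrative, chosen only to show what the measurement implies, and is not a value from the paper.

```python
def speedup(p, serial_fraction):
    """Amdahl's law for a fixed-size problem: the serial fraction
    bounds the achievable speedup on p processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def efficiency(p, serial_fraction):
    # Parallel efficiency: speedup divided by processor count.
    return speedup(p, serial_fraction) / p

# A serial fraction near 0.6% reproduces roughly 85% efficiency on 32 CPUs.
eff_32 = efficiency(32, 0.0057)
```

Under this model, an 85 percent efficiency on 32 processors corresponds to a serial (non-parallelizable) fraction of well under one percent of the work.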

  5. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  6. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    Science.gov (United States)

    Juhasz, Albert J.; Bloomfield, Harvey S.

    1987-01-01

    A combinatorial reliability approach was used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis was also performed, specifically for a 100-kWe nuclear Brayton power conversion system with parallel redundancy. Although this study was done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.

  7. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    Science.gov (United States)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.

  8. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
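
Since parallel analysis is among the factor retention methods the survey found most often, a minimal sketch of Horn's procedure may help: observed eigenvalues of the data correlation matrix are compared against the mean eigenvalues obtained from random data of the same dimensions, and factors are retained while the observed value exceeds the random baseline. The simulated two-factor data set below is hypothetical.

```python
import numpy as np

def parallel_analysis(data, n_iter=200, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the mean eigenvalues of correlation matrices computed from
    random normal data with the same sample size and variable count."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig))
```

Applied to data simulated from two strong latent factors, the procedure recovers two factors, whereas the eigenvalue-greater-than-one rule can over-extract on noisier data.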

  9. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard ROOT analysis

    Science.gov (United States)

    Grzeszczuk, A.; Kowalski, S.

    2015-04-01

    Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to increase graphics speed by performing calculations in parallel. The success of this solution has opened General-Purpose Graphics Processing Unit (GPGPU) technology to applications not coupled with graphics. GPGPU systems can be applied as an effective tool for reducing the huge volume of data in pulse-shape analysis measurements, by on-line recalculation or by a very quick compression system. The simplified structure of the CUDA system and its programming model, based on the example of an Nvidia GeForce GTX 580 card, are presented in our poster contribution, both in a stand-alone version and as a ROOT application.

  10. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  11. cudaBayesreg: Parallel Implementation of a Bayesian Multilevel Model for fMRI Data Analysis

    Directory of Open Access Journals (Sweden)

    Adelino R. Ferreira da Silva

    2011-10-01

    Full Text Available Graphics processing units (GPUs) are rapidly gaining maturity as powerful general parallel computing devices. A key feature in the development of modern GPUs has been the advancement of the programming model and programming tools. Compute Unified Device Architecture (CUDA) is a software platform for massively parallel high-performance computing on Nvidia many-core GPUs. In functional magnetic resonance imaging (fMRI), the volume of the data to be processed and the type of statistical analysis to be performed call for high-performance computing strategies. In this work, we present the main features of the R-CUDA package cudaBayesreg, which implements in CUDA the core of a Bayesian multilevel model for the analysis of brain fMRI data. The statistical model implements a Gibbs sampler for multilevel/hierarchical linear models with a normal prior. The main contribution to the increased performance comes from the use of separate threads for fitting the linear regression model at each voxel in parallel. The R-CUDA implementation of the Bayesian model proposed here has been able to significantly reduce the run-time processing of Markov chain Monte Carlo (MCMC) simulations used in Bayesian fMRI data analyses. Presently, cudaBayesreg is only configured for Linux systems with Nvidia CUDA support.

  12. Quantitative analysis of pulmonary perfusion using time-resolved parallel 3D MRI - initial results

    International Nuclear Information System (INIS)

    Fink, C.; Buhmann, R.; Plathow, C.; Puderbach, M.; Kauczor, H.U.; Risse, F.; Ley, S.; Meyer, F.J.

    2004-01-01

    Purpose: to assess the use of time-resolved parallel 3D MRI for a quantitative analysis of pulmonary perfusion in patients with cardiopulmonary disease. Materials and methods: eight patients with pulmonary embolism or pulmonary hypertension were examined with a time-resolved 3D gradient echo pulse sequence with parallel imaging techniques (FLASH 3D, TE/TR: 0.8/1.9 ms; flip angle: 40°; GRAPPA). A quantitative perfusion analysis based on indicator dilution theory was performed using dedicated software. Results: patients with pulmonary embolism or chronic thromboembolic pulmonary hypertension revealed characteristic wedge-shaped perfusion defects at perfusion MRI. These were characterized by a decreased pulmonary blood flow (PBF) and pulmonary blood volume (PBV) and an increased mean transit time (MTT). Patients with primary pulmonary hypertension or Eisenmenger syndrome showed a more homogeneous perfusion pattern. The mean MTT across patients ranged from 3.3 to 4.7 s. The mean PBF and PBV showed a broader interindividual variation (PBF: 104-322 ml/100 ml/min; PBV: 8-21 ml/100 ml). Conclusion: time-resolved parallel 3D MRI allows at least a semi-quantitative assessment of lung perfusion. Future studies will have to assess the clinical value of this quantitative information for the diagnosis and management of cardiopulmonary disease. (orig.) [de]
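
The indicator-dilution quantities named above can be illustrated with a simplified sketch: PBV as the ratio of areas under the tissue and arterial concentration-time curves, MTT as the normalized first moment of the tissue curve, and PBF from the central volume principle PBF = PBV/MTT. Clinical packages instead deconvolve the tissue curve with the arterial input function, so this is only a schematic, and the Gaussian test curves are synthetic.

```python
import numpy as np

def _trapz(y, x):
    # Trapezoidal rule, kept local to avoid NumPy version differences.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def perfusion_metrics(t, c_tissue, c_artery):
    """Schematic indicator-dilution metrics for a single voxel/region."""
    # Blood volume fraction: area ratio of tissue and arterial curves.
    pbv = _trapz(c_tissue, t) / _trapz(c_artery, t)
    # Mean transit time via the normalized first moment of the tissue
    # curve (a simplification of the deconvolution-based estimate).
    mtt = _trapz(t * c_tissue, t) / _trapz(c_tissue, t)
    # Central volume principle: flow = volume / transit time.
    pbf = pbv / mtt
    return pbv, mtt, pbf
```

On synthetic bolus curves the area ratio and first moment recover the constructed volume fraction and mean time directly.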

  13. Design and Analysis of Cooperative Cable Parallel Manipulators for Multiple Mobile Cranes

    Directory of Open Access Journals (Sweden)

    Bin Zi

    2012-11-01

    Full Text Available The design, dynamic modelling, and workspace analysis of cooperative cable parallel manipulators for multiple mobile cranes (CPMMCs) are presented in this paper. The CPMMCs can handle complex tasks that are more difficult or even impossible for a single mobile crane. The kinematics and dynamics of the CPMMCs are studied on the basis of geometric methodology and d'Alembert's principle, and a mathematical model of the CPMMCs is developed and presented with dynamic simulation. A constant-orientation workspace analysis of the CPMMCs is carried out additionally. As an example, a cooperative cable parallel manipulator for triple mobile cranes with six degrees of freedom is investigated on the basis of the above design objectives.

  14. Instantaneous Kinematics Analysis via Screw-Theory of a Novel 3-CRC Parallel Mechanism

    Directory of Open Access Journals (Sweden)

    Hussein de la Torre

    2016-06-01

    Full Text Available This paper presents the mobility and kinematics analysis of a novel parallel mechanism that is composed of one base, one platform and three identical limbs with CRC joints. The paper obtains closed-form solutions to the direct and inverse kinematics problems, and determines the mobility of the mechanism and its instantaneous kinematics by applying screw theory. The obtained results show that this parallel robot belongs to the family 2R1T, since the platform exhibits 3 DOF, i.e., one translation perpendicular to the base and two rotations about skew axes. In order to calculate the direct instantaneous kinematics, this paper introduces the vector mh, which is part of the joint velocity vector that multiplies the overall inverse Jacobian matrix. This paper compares the results of simulations and numerical examples using Mathematica and SolidWorks to verify the accuracy of the analytical results.

  15. An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems

    KAUST Repository

    Kuwahara, Hiroyuki

    2011-01-01

    Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This makes computationally intensive analyses such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability substantially more practical than would otherwise be possible with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
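
The serial baseline being parallelized here is Gillespie's stochastic simulation algorithm. A compact direct-method sketch on a hypothetical three-step delivery chain (vector outside the cell → endosome → cytoplasm → nucleus, with illustrative first-order rates, not the Varga et al. model) looks like this:

```python
import numpy as np

def ssa(x0, stoich, rate_fn, t_end, seed=0):
    """Gillespie direct-method SSA (the serial baseline).
    x0: initial species counts; stoich: (n_reactions, n_species) state
    update matrix; rate_fn(x) -> vector of reaction propensities."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = rate_fn(x)
        a0 = a.sum()
        if a0 <= 0.0:          # no reaction can fire; system is exhausted
            break
        t += rng.exponential(1.0 / a0)      # time to next reaction
        j = rng.choice(len(a), p=a / a0)    # which reaction fires
        x += stoich[j]
    return x
```

Running the chain long enough moves essentially every molecule into the final (nuclear) compartment while conserving the total count, which is a convenient sanity check on the update matrix.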

  16. Implementation and analysis of a Navier-Stokes algorithm on parallel computers

    Science.gov (United States)

    Fatoohi, Raad A.; Grosch, Chester E.

    1988-01-01

    The results of the implementation of a Navier-Stokes algorithm on three parallel/vector computers are presented. The object of this research is to determine how well, or poorly, a single numerical algorithm would map onto three different architectures. The algorithm is a compact difference scheme for the solution of the incompressible, two-dimensional, time-dependent Navier-Stokes equations. The computers were chosen so as to encompass a variety of architectures. They are the following: the MPP, an SIMD machine with 16K bit serial processors; Flex/32, an MIMD machine with 20 processors; and Cray/2. The implementation of the algorithm is discussed in relation to these architectures and measures of the performance on each machine are given. The basic comparison is among SIMD instruction parallelism on the MPP, MIMD process parallelism on the Flex/32, and vectorization of a serial code on the Cray/2. Simple performance models are used to describe the performance. These models highlight the bottlenecks and limiting factors for this algorithm on these architectures. Finally, conclusions are presented.

  17. Pteros 2.0: Evolution of the fast parallel molecular analysis library for C++ and python.

    Science.gov (United States)

    Yesylevskyy, Semen O

    2015-07-15

    Pteros is a high-performance open-source library for molecular modeling and analysis of molecular dynamics trajectories. Starting from version 2.0, Pteros is available for the C++ and Python programming languages with very similar interfaces. This makes it suitable for writing complex reusable programs in C++ and simple interactive scripts in Python alike. The new version improves the facilities for asynchronous trajectory reading and parallel execution of analysis tasks by introducing analysis plugins, which can be written in either C++ or Python in a completely uniform way. The high level of abstraction provided by analysis plugins greatly simplifies prototyping and implementation of complex analysis algorithms. Pteros is available for free under the Artistic License from http://sourceforge.net/projects/pteros/. © 2015 Wiley Periodicals, Inc.

  18. Parallel factor ChIP provides essential internal control for quantitative differential ChIP-seq.

    Science.gov (United States)

    Guertin, Michael J; Cullen, Amy E; Markowetz, Florian; Holding, Andrew N

    2018-04-17

    A key challenge in quantitative ChIP combined with high-throughput sequencing (ChIP-seq) is the normalization of data in the presence of genome-wide changes in occupancy. Analysis-based normalization methods were developed for transcriptomic data, and these depend on the underlying assumption that total transcription does not change between conditions. For genome-wide changes in transcription factor (TF) binding, these assumptions do not hold true. The challenges in normalization are confounded by experimental variability during sample preparation, processing and recovery. We present a novel normalization strategy utilizing an internal standard of unchanged peaks for reference. Our method can be readily applied to monitor genome-wide changes by ChIP-seq that are otherwise lost or misrepresented through analytical normalization. We compare our approach to normalization by total read depth and to two alternative methods that utilize external experimental controls to study TF binding. We successfully resolve the key challenges in quantitative ChIP-seq analysis and demonstrate its application by monitoring the loss of Estrogen Receptor-alpha (ER) binding upon fulvestrant treatment, ER binding in response to estradiol, ER-mediated change in H4K12 acetylation and profiling of ER binding in patient-derived xenografts. This is supported by an adaptable pipeline to normalize and quantify differential TF binding genome-wide and generate metrics for differential binding at individual sites.
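
The internal-standard idea — deriving per-sample scale factors from a reference set of peaks assumed unchanged between conditions, rather than from total read depth — can be sketched as follows. The toy counts and function names are ours, not the published pipeline; in the example, the treated sample has half the sequencing depth, so depth-based normalization would hide the genuine loss of binding at the non-reference peaks.

```python
import numpy as np

def reference_scale_factors(counts, ref_idx):
    """Scale each sample so coverage over a reference set of unchanged
    peaks is equalized across samples.
    counts: (n_peaks, n_samples) read-count matrix.
    ref_idx: indices of peaks assumed unchanged between conditions."""
    ref_totals = counts[ref_idx].sum(axis=0).astype(float)
    # Scale every sample toward the geometric mean of the reference totals.
    target = np.exp(np.mean(np.log(ref_totals)))
    return target / ref_totals

counts = np.array([[100, 20],    # TF peak with genuine loss on treatment
                   [ 80, 16],    # TF peak with genuine loss on treatment
                   [ 60, 30],    # reference peak (unchanged; depth halved)
                   [ 90, 45]])   # reference peak (unchanged; depth halved)
scale = reference_scale_factors(counts, ref_idx=[2, 3])
normalized = counts * scale
```

After scaling, the reference peaks agree across samples while the genuine genome-wide loss at the remaining peaks is preserved instead of being normalized away.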

  19. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania's population and to assess the influence of these factors. The analysis consists of two main parts: the first describes population aging and its characteristics in theoretical terms; the second assesses the demographic trends and factors that influence population aging and analyses the determinants of the aging of Lithuania's population. The article concludes that the decline in the birth rate and the increase in the number of emigrants relative to immigrants have the greatest impact on population aging, so considerable attention should be paid to the management of these demographic processes.

  20. Kinematics and dynamics analysis of a quadruped walking robot with parallel leg mechanism

    Science.gov (United States)

    Wang, Hongbo; Sang, Lingfeng; Hu, Xing; Zhang, Dianfan; Yu, Hongnian

    2013-09-01

    A walking robot for the elderly and the disabled should have a large payload capacity, high stiffness, and stability. However, existing walking robots cannot achieve these requirements because of their weight-to-payload ratio and limited functionality. Improving the capacity and functions of walking robots is therefore an important research issue. Based on walking requirements and combining modularization with reconfigurable ideas, a quadruped/biped reconfigurable walking robot with a parallel leg mechanism is proposed. The proposed robot can be used as both a biped and a quadruped walking robot. The kinematics and performance analysis of a 3-UPU parallel mechanism, which is the basic leg mechanism of the quadruped walking robot, are conducted and the structural parameters are optimized. The results show that the performance of the walking robot is optimal when the circumradii R, r of the upper and lower platforms of the leg mechanism are 161.7 mm and 57.7 mm, respectively. Based on the optimal results, the kinematics and dynamics of the quadruped walking robot in the static walking mode are derived with the application of parallel mechanism and influence coefficient theory, and the optimal coordination distribution of the dynamic load for the quadruped walking robot with over-determinate inputs is analyzed, which resolves the dynamic load coupling caused by the robot's branch constraints during walking. Besides laying a theoretical foundation for development of the prototype, the kinematics and dynamics studies on the quadruped walking robot also boost the theoretical research of quadruped walking and the practical applications of parallel mechanisms.

  1. Identification of Genetic Susceptibility to Childhood Cancer through Analysis of Genes in Parallel

    Science.gov (United States)

    Plon, Sharon E.; Wheeler, David A.; Strong, Louise C.; Tomlinson, Gail E.; Pirics, Michael; Meng, Qingchang; Cheung, Hannah C.; Begin, Phyllis R.; Muzny, Donna M.; Lewis, Lora; Biegel, Jaclyn A.; Gibbs, Richard A.

    2011-01-01

    Clinical cancer genetic susceptibility analysis typically proceeds sequentially beginning with the most likely causative gene. The process is time consuming and the yield is low particularly for families with unusual patterns of cancer. We determined the results of in parallel mutation analysis of a large cancer-associated gene panel. We performed deletion analysis and sequenced the coding regions of 45 genes (8 oncogenes and 37 tumor suppressor or DNA repair genes) in 48 childhood cancer patients who also (1) were diagnosed with a second malignancy under age 30, (2) have a sibling diagnosed with cancer under age 30 and/or (3) have a major congenital anomaly or developmental delay. Deleterious mutations were identified in 6 of 48 (13%) families, 4 of which met the sibling criteria. Mutations were identified in genes previously implicated in both dominant and recessive childhood syndromes including SMARCB1, PMS2, and TP53. No pathogenic deletions were identified. This approach has provided efficient identification of childhood cancer susceptibility mutations and will have greater utility as additional cancer susceptibility genes are identified. Integrating parallel analysis of large gene panels into clinical testing will speed results and increase diagnostic yield. The failure to detect mutations in 87% of families highlights that a number of childhood cancer susceptibility genes remain to be discovered. PMID:21356188

  2. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    Science.gov (United States)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.

  3. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2013-01-01

    Power electronics based MicroGrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of parallel connected three-phase VSIs are derived. The proposed voltage and current inner control loops and the mat... ... control restores the frequency and amplitude deviations produced by the primary control. Also, a synchronization algorithm is presented in order to connect the MicroGrid to the grid. Experimental results are provided to validate the performance and robustness of the parallel VSI system control...
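
The primary control referred to above is conventionally a pair of droop laws linking active power to frequency and reactive power to voltage amplitude, which lets parallel VSIs share load without communication. A minimal sketch, with illustrative coefficients rather than the paper's values:

```python
def droop(P, Q, f0=50.0, E0=311.0, m=1e-5, n=1e-4):
    """Conventional P-f / Q-E droop laws for a parallel-connected VSI:
        f = f0 - m * P,    E = E0 - n * Q
    f0 [Hz] and E0 [V peak] are the no-load setpoints; the droop
    coefficients m, n are illustrative, not from the paper."""
    return f0 - m * P, E0 - n * Q

# A VSI supplying 10 kW / 1 kvar sags slightly in frequency and amplitude.
f, E = droop(P=10000.0, Q=1000.0)
```

The secondary control mentioned in the abstract then shifts f0 and E0 to remove exactly these steady-state deviations.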

  4. Diffusion tensor tractography of the brainstem pyramidal tract; A study on the optimal reduction factor in parallel imaging

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Yun Jung; Park, Jong Bin; Kim, Jae Hyoung; Choi, Byung Se; Jung, Cheol Kyu [Dept. of of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2016-08-15

    Parallel imaging mitigates susceptibility artifacts that can adversely affect diffusion tensor tractography (DTT) of the pons, depending on the reduction (R) factor. We aimed to find the optimal R factor for DTT of the pons that would allow us to visualize the largest possible number of pyramidal tract fibers. Diffusion tensor imaging was performed on 10 healthy subjects at 3 Tesla based on single-shot echo-planar imaging using the following parameters: b value, 1000 s/mm²; gradient directions, 15; voxel size, 2 × 2 × 2 mm³; and R factors, 1, 2, 3, 4, and 5. DTT of the right and left pyramidal tracts in the pons was conducted in all subjects. Signal-to-noise ratio (SNR), image distortion, and the number of fibers in the tracts were compared across R factors. SNR, image distortion, and fiber number differed significantly according to R factor. Maximal SNR was achieved with an R factor of 2. Image distortion was minimal with an R factor of 5. The number of visible fibers was greatest with an R factor of 3. An R factor of 3 is optimal for DTT of the pontine pyramidal tract. A balanced consideration of SNR and image distortion, which do not have the same dependence on the R factor, is necessary for DTT of the pons.
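
The opposing dependences on the R factor can be illustrated with a toy model: noise grows as g·√R (the standard parallel-imaging SNR penalty), while acceleration shortens the single-shot EPI readout, reducing T2* signal decay. All numbers below (g-factors, echo-time scaling, T2*) are hypothetical; the sketch only illustrates why an intermediate R can win, not the study's measurements.

```python
import math

def epi_snr(R, g, t2s=30.0, te0=90.0):
    """Toy single-shot EPI SNR model: acceleration R shortens the
    effective echo time roughly as te0/R (less T2* decay), while the
    g-factor noise amplification costs a factor g * sqrt(R).
    All parameter values are illustrative, not from the study."""
    te = te0 / R
    return math.exp(-te / t2s) / (g * math.sqrt(R))

# Hypothetical coil-dependent g-factors rising with acceleration.
g_factors = {1: 1.00, 2: 1.05, 3: 1.20, 4: 1.45, 5: 1.80}
best_R = max(g_factors, key=lambda R: epi_snr(R, g_factors[R]))
```

With these illustrative numbers the model peaks at an intermediate acceleration, mirroring the qualitative trade-off the study reports between SNR and distortion.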

  5. Practical enhancement factor model based on GM for multiple parallel reactions: Piperazine (PZ) CO2 capture

    DEFF Research Database (Denmark)

    Gaspar, Jozsef; Fosbøl, Philip Loldrup

    2017-01-01

    Reactive absorption is a key process for gas separation and purification and it is the main technology for CO2 capture. Thus, reliable and simple mathematical models for mass transfer rate calculation are essential. Models which apply to parallel interacting and non-interacting reactions, for all ... desorption and pinch conditions. In this work, we apply the GM model to multiple parallel reactions. We deduce the model for piperazine (PZ) CO2 capture and we validate it against wetted-wall column measurements using 2, 5 and 8 molal PZ for temperatures between 40 °C and 100 °C and CO2 loadings between 0.23 and 0.41 mol CO2/2 mol PZ. We show that overall second-order kinetics describes well the reaction between CO2 and PZ, accounting for the carbamate and bicarbamate reactions. Here we prove the GM model for piperazine and MEA, but we expect that this practical approach is applicable to various amines...
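
For orientation, the classical film-theory enhancement factor for a pseudo-first-order reaction is E = Ha/tanh(Ha), with Hatta number Ha = √(k_obs·D)/k_L. This is a textbook simplification used as a point of reference, not the GM model itself, and the rate, diffusivity, and mass-transfer values below are illustrative only.

```python
import math

def enhancement_factor(k_obs, d_co2, k_l):
    """Film-theory enhancement factor for a pseudo-first-order reaction:
        Ha = sqrt(k_obs * d_co2) / k_l,   E = Ha / tanh(Ha)
    k_obs [1/s]: pseudo-first-order rate constant (k2 * amine conc.);
    d_co2 [m^2/s]: CO2 diffusivity; k_l [m/s]: liquid-film coefficient.
    A textbook sketch with illustrative inputs, not the GM model."""
    ha = math.sqrt(k_obs * d_co2) / k_l
    return ha / math.tanh(ha), ha

# Illustrative fast-reaction regime: E approaches Ha.
E, ha = enhancement_factor(k_obs=1.0e4, d_co2=1.5e-9, k_l=1.0e-4)
```

In the slow-reaction limit E tends to 1 (no enhancement), while in the fast pseudo-first-order regime E approaches Ha, which is the regime relevant to amine CO2 absorption.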

  6. Performance analysis of a refrigeration system with parallel control of evaporation pressure

    International Nuclear Information System (INIS)

    Lee, Jong Suk

    2008-01-01

    The conventional refrigeration system is composed of a compressor, condenser, receiver, expansion valve or capillary tube, and an evaporator. The refrigeration system used in this study has an additional expansion valve and evaporator, along with an Evaporation Pressure Regulator (EPR) at the exit side of the evaporator. The two evaporators can be operated at different temperatures according to the opening of the EPR. The experimental results obtained using the refrigeration system with parallel control of evaporation pressure are presented, and a performance analysis of the refrigeration system with two evaporators is conducted.

  7. A parallel finite element method for the analysis of crystalline solids

    DEFF Research Database (Denmark)

    Sørensen, N.J.; Andersen, B.S.

    1996-01-01

    A parallel finite element method suitable for the analysis of 3D quasi-static crystal plasticity problems has been developed. The method is based on substructuring of the original mesh into a number of substructures, which are treated as isolated finite element models related via the interface conditions. The resulting interface equations are solved using a direct solution method. The method shows a good speedup when increasing the number of processors from 1 to 8, and the effective solution of 3D crystal plasticity problems whose size is much too large for a single workstation becomes possible.
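The substructuring idea described above can be sketched in a few lines of linear algebra: each substructure's interior unknowns are eliminated (condensed) onto the shared interface, the interface equations are solved directly, and the interior solutions are recovered by back-substitution. The dense two-substructure system below is a hypothetical illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    # random symmetric positive definite block
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

# Block system for two substructures sharing one interface:
# [A1  0   B1] [x1]   [f1]
# [0   A2  B2] [x2] = [f2]
# [B1' B2' C ] [xg]   [fg]
n1, n2, ng = 5, 6, 3
A1, A2 = spd(n1), spd(n2)
B1 = rng.standard_normal((n1, ng))
B2 = rng.standard_normal((n2, ng))
C = spd(ng)
f1, f2, fg = rng.standard_normal(n1), rng.standard_normal(n2), rng.standard_normal(ng)

# Condense each substructure onto the interface; these two solves are
# independent and could run on separate processors
S = C - B1.T @ np.linalg.solve(A1, B1) - B2.T @ np.linalg.solve(A2, B2)
g = fg - B1.T @ np.linalg.solve(A1, f1) - B2.T @ np.linalg.solve(A2, f2)

# Direct solution of the interface equations, then local back-substitution
xg = np.linalg.solve(S, g)
x1 = np.linalg.solve(A1, f1 - B1 @ xg)
x2 = np.linalg.solve(A2, f2 - B2 @ xg)

# Check against the monolithic solve
K = np.block([[A1, np.zeros((n1, n2)), B1],
              [np.zeros((n2, n1)), A2, B2],
              [B1.T, B2.T, C]])
x = np.linalg.solve(K, np.concatenate([f1, f2, fg]))
assert np.allclose(np.concatenate([x1, x2, xg]), x)
```

The interface (Schur complement) system S is what the abstract's "interface equations" amount to; its direct solution is the serial bottleneck that limits scalability as the processor count grows.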

  8. Analysis of the IDR(s) Family of Solvers for Reservoir Simulations on Different Parallel Architectures

    Directory of Open Access Journals (Sweden)

    Seignole Vincent

    2016-09-01

    Full Text Available The present contribution provides a detailed analysis of several realizations of the IDR(s) family of solvers under different facets: robustness, performance, and implementation in different parallel environments, compared against a sequential IDR(s) implementation and tested on several industrial, geologically and structurally coherent 3D field-case reservoir models. This work is the result of continuous efforts towards time-response improvement of Storengy's three-dimensional reservoir simulator named Multi, dedicated to gas-storage applications.

  9. Design and analysis of all-dielectric broadband nonpolarizing parallel-plate beam splitters.

    Science.gov (United States)

    Wang, Wenliang; Xiong, Shengming; Zhang, Yundong

    2007-06-01

    Past research on all-dielectric nonpolarizing beam splitters is reviewed. With the aid of the needle thin-film synthesis method and the conjugate gradient refinement method, three nonpolarizing parallel-plate beam splitters with different split ratios are designed over a 200 nm spectral range centered at 550 nm at an incidence angle of 45 degrees. The choice of materials and the initial stack are based on the Costich and Thelen theories. The results of design and analysis show that the designs maintain a very low polarization ratio over the working range of the spectrum and have a reasonable angular field.

  10. Factor Analysis for Clustered Observations.

    Science.gov (United States)

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)
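Factor extraction of the kind these records rely on can be illustrated compactly. The sketch below simulates data from a one-factor model and recovers the loadings from the leading eigenpair of the correlation matrix; this is a single-level, principal-axis-style illustration on synthetic data, not the two-level scoring algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data from a one-factor model: x = loading * f + noise,
# with unit-variance observed variables
n, p = 2000, 6
true_load = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
f = rng.standard_normal(n)
x = np.outer(f, true_load) + rng.standard_normal((n, p)) * np.sqrt(1 - true_load**2)

# Extraction: leading eigenvector of the correlation matrix, scaled by
# the square root of its eigenvalue, approximates the loading vector
R = np.corrcoef(x, rowvar=False)
vals, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
est_load = vecs[:, -1] * np.sqrt(vals[-1])
est_load *= np.sign(est_load.sum())     # resolve sign indeterminacy

# Estimated loadings track the true ones closely
assert np.corrcoef(est_load, true_load)[0, 1] > 0.9
```

A two-level model, as in the record above, would additionally decompose the covariance into between-cluster and within-cluster parts before extraction.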

  11. Transforming Rubrics Using Factor Analysis

    Science.gov (United States)

    Baryla, Ed; Shelley, Gary; Trainor, William

    2012-01-01

    Student learning and program effectiveness are often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…

  12. New Structural Representation and Digital-Analysis Platform for Symmetrical Parallel Mechanisms

    Directory of Open Access Journals (Sweden)

    Wenao Cao

    2013-05-01

    Full Text Available An automatic design platform capable of automatic structural analysis, structural synthesis and application of parallel mechanisms would be a great aid in the conceptual design of mechanisms, though up to now such a platform has existed only as an idea. The work in this paper constitutes part of such a platform. Based on screw theory and a new structural representation method proposed here, which builds a one-to-one correspondence between strings of representative characters and the kinematic structures of symmetrical parallel mechanisms (SPMs), this paper develops a fully automatic approach for mobility (degree-of-freedom) analysis and further establishes an automatic digital-analysis platform for SPMs. With this platform, users simply enter the strings of representative characters; the kinematic structures of the SPMs are then generated and displayed automatically, and the mobility and its properties are also analysed and displayed automatically. Typical examples are provided to show the effectiveness of the approach.

  13. Massively Parallel, Molecular Analysis Platform Developed Using a CMOS Integrated Circuit With Biological Nanopores

    Science.gov (United States)

    Roever, Stefan

    2012-01-01

    A massively parallel, low-cost molecular analysis platform will dramatically change the nature of protein, molecular and genomics research, DNA sequencing, and ultimately, molecular diagnostics. An integrated circuit (IC) with 264 sensors was fabricated using standard CMOS semiconductor processing technology. Each of these sensors is individually controlled with precision analog circuitry and is capable of single molecule measurements. Under electronic and software control, the IC was used to demonstrate the feasibility of creating and detecting lipid bilayers and biological nanopores using wild type α-hemolysin. The ability to dynamically create bilayers over each of the sensors will greatly accelerate pore development and pore mutation analysis. In addition, the noise performance of the IC was measured to be 30 fA (rms). With this noise performance, single base detection of DNA was demonstrated using α-hemolysin. The data shows that a single molecule, electrical detection platform using biological nanopores can be operationalized and can ultimately scale to millions of sensors. Such a massively parallel platform will revolutionize molecular analysis and will completely change the field of molecular diagnostics in the future.

  14. VALIDATION OF CRACK INTERACTION LIMIT MODEL FOR PARALLEL EDGE CRACKS USING TWO-DIMENSIONAL FINITE ELEMENT ANALYSIS

    Directory of Open Access Journals (Sweden)

    R. Daud

    2013-06-01

    Full Text Available The shielding interaction effects of two parallel edge cracks in finite-thickness plates subjected to a remote tension load are analyzed using a developed finite element analysis program. In the present study, the crack interaction limit is evaluated based on the fitness-for-service (FFS) code, and focus is given to the weak crack interaction region, where the crack interval exceeds the crack length (b > a). Crack interaction factors are evaluated based on Mode I stress intensity factors (SIFs) computed using a displacement extrapolation technique. Parametric studies covered a wide range of crack-length-to-width ratios (0.05 ≤ a/W ≤ 0.5) and crack interval ratios (b/a > 1). For validation, crack interaction factors are compared with single edge crack SIFs as a state of zero interaction. Within the considered range of parameters, the proposed numerical evaluation used to predict the crack interaction factor reduces the error of the existing analytical solution from 1.92% to 0.97% at higher a/W. In reference to FFS codes, the small discrepancy in the prediction of the crack interaction factor validates the reliability of the numerical model to predict crack interaction limits under shielding interaction effects. In conclusion, the numerical model gave a successful prediction of the crack interaction limit, which can be used as a reference for the shielding orientation of other cracks.

  15. ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains

    Science.gov (United States)

    Canova, Carlos; Denker, Michael; Gerstein, George; Helias, Moritz

    2016-01-01

    With the ability to observe the activity from large numbers of neurons simultaneously using modern recording technologies, the chance to identify sub-networks involved in coordinated processing increases. Sequences of synchronous spike events (SSEs) constitute one type of such coordinated spiking that propagates activity in a temporally precise manner. The synfire chain was proposed as one potential model for such network processing. Previous work introduced a method for visualization of SSEs in massively parallel spike trains, based on an intersection matrix that contains in each entry the degree of overlap of active neurons in two corresponding time bins. Repeated SSEs are reflected in the matrix as diagonal structures of high overlap values. The method as such, however, leaves the task of identifying these diagonal structures to visual inspection rather than to a quantitative analysis. Here we present ASSET (Analysis of Sequences of Synchronous EvenTs), an improved, fully automated method which determines diagonal structures in the intersection matrix by a robust mathematical procedure. The method consists of a sequence of steps that i) assess which entries in the matrix potentially belong to a diagonal structure, ii) cluster these entries into individual diagonal structures and iii) determine the neurons composing the associated SSEs. We employ parallel point processes generated by stochastic simulations as test data to demonstrate the performance of the method under a wide range of realistic scenarios, including different types of non-stationarity of the spiking activity and different correlation structures. Finally, the ability of the method to discover SSEs is demonstrated on complex data from large network simulations with embedded synfire chains. Thus, ASSET represents an effective and efficient tool to analyze massively parallel spike data for temporal sequences of synchronous activity. PMID:27420734
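The intersection matrix at the heart of ASSET can be sketched directly: bin the parallel spike trains into a binary neuron-by-bin matrix, then compute, for every pair of bins, how many neurons are active in both. A repeated synchronous sequence appears as a diagonal of elevated entries between its two repetitions. The toy data below are invented for illustration, not the benchmark simulations of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Binned activity: binary matrix, rows = neurons, cols = time bins
n_neurons, n_bins = 50, 40
act = (rng.random((n_neurons, n_bins)) < 0.1).astype(int)

# Embed a repeated sequence: neurons 0..9 fire in order in
# bins 5..14 and again in bins 25..34
for k in range(10):
    act[k, 5 + k] = 1
    act[k, 25 + k] = 1

# Intersection matrix: entry (i, j) = number of neurons active in both bins
inter = act.T @ act

# The repetition shows up as a diagonal of overlap between the two
# occurrences: bin pairs (5+k, 25+k) each share at least neuron k
diag = np.array([inter[5 + k, 25 + k] for k in range(10)])
assert (diag >= 1).all()
```

ASSET's contribution is then to decide statistically which such diagonal structures are significant and to cluster their entries into individual SSEs, rather than leaving that to visual inspection.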

  16. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems.

    Science.gov (United States)

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C M A; Saltz, Joel

    2017-09-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very compute-demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies.

  17. Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis

    Directory of Open Access Journals (Sweden)

    Martin Schulz

    2008-01-01

    Full Text Available Over the last decades a large number of performance tools have been developed to analyze and optimize high performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about three years ago to overcome these limitations and provide efficient, easy to apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps, as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large-scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces, including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.

  18. ADaCGH: A parallelized web-based application and R package for the analysis of aCGH data.

    Directory of Open Access Journals (Sweden)

    Ramón Díaz-Uriarte

    Full Text Available BACKGROUND: Copy number alterations (CNAs) in genomic DNA have been associated with complex human diseases, including cancer. One of the most common techniques to detect CNAs is array-based comparative genomic hybridization (aCGH). The availability of aCGH platforms and the need for identification of CNAs has resulted in a wealth of methodological studies. METHODOLOGY/PRINCIPAL FINDINGS: ADaCGH is an R package and a web-based application for the analysis of aCGH data. It implements eight methods for detection of CNAs, gains and losses of genomic DNA, including all of the best performing ones from two recent reviews (CBS, GLAD, CGHseg, HMM). For improved speed, we use parallel computing (via MPI). Additional information (GO terms, PubMed citations, KEGG and Reactome pathways) is available for individual genes, and for sets of genes with altered copy numbers. CONCLUSIONS/SIGNIFICANCE: ADaCGH represents a qualitative increase in the standards of these types of applications: (a) all of the best performing algorithms are included, not just one or two; (b) we do not limit ourselves to providing a thin layer of CGI on top of existing BioConductor packages, but instead carefully use parallelization, examining different schemes, and are able to achieve significant decreases in user waiting time (factors of up to 45×); (c) we have added functionality not currently available in some methods, to adapt to recent recommendations (e.g., merging of segmentation results in wavelet-based and CGHseg algorithms); (d) we incorporate redundancy, fault-tolerance and checkpointing, which are unique among web-based, parallelized applications; (e) all of the code is available under open source licenses, allowing others to build upon, copy, and adapt our code for other software projects.

  19. Fourier analysis of parallel block-Jacobi splitting with transport synthetic acceleration in two-dimensional geometry

    International Nuclear Information System (INIS)

    Rosa, M.; Warsa, J. S.; Chang, J. H.

    2007-01-01

    A Fourier analysis is conducted in two-dimensional (2D) Cartesian geometry for the discrete-ordinates (SN) approximation of the neutron transport problem solved with Richardson iteration (Source Iteration) and Richardson iteration preconditioned with Transport Synthetic Acceleration (TSA), using the Parallel Block-Jacobi (PBJ) algorithm. The results for the un-accelerated algorithm show that convergence of PBJ can degrade, leading in particular to stagnation of GMRES(m) in problems containing optically thin sub-domains. The results for the accelerated algorithm indicate that TSA can be used to efficiently precondition an iterative method in the optically thin case when implemented in the 'modified' version MTSA, in which only the scattering in the low order equations is reduced by some non-negative factor β<1. (authors)
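Richardson (source) iteration, as analyzed above, has the generic preconditioned fixed-point form x ← x + M⁻¹(b − Ax), which converges when the spectral radius of the iteration matrix I − M⁻¹A is below one. The numpy sketch below uses a diagonally dominant toy system as a stand-in; it is not the discrete-ordinates transport discretization of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 20
# Strongly diagonally dominant A, so the Jacobi-preconditioned
# Richardson iteration converges
A = rng.random((n, n)) * 0.02
A += np.eye(n)
b = rng.standard_normal(n)

# Preconditioned Richardson: x <- x + M^{-1}(b - A x), with M = diag(A)
Minv = 1.0 / np.diag(A)
x = np.zeros(n)
for _ in range(200):
    x = x + Minv * (b - A @ x)

assert np.allclose(A @ x, b, atol=1e-8)
```

The degradation reported in the abstract corresponds to the iteration matrix's spectral radius approaching one (optically thin sub-domains); acceleration schemes such as TSA act as a better M so the radius stays bounded away from one.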

  20. Development of GPU Based Parallel Computing Module for Solving Pressure Equation in the CUPID Component Thermo-Fluid Analysis Code

    International Nuclear Information System (INIS)

    Lee, Jin Pyo; Joo, Han Gyu

    2010-01-01

    In the thermo-fluid analysis code named CUPID, a linear system of pressure equations must be solved in each iteration step. The time spent repeatedly solving the linear system can be quite significant, because large sparse matrices of rank greater than 50,000 are involved and the diagonal dominance of the system hardly holds. Therefore, parallelization of the linear system solver is essential to reduce the computing time. Meanwhile, Graphics Processing Units (GPUs) have been developed as highly parallel, multi-core processors to meet the global demand for high-quality 3D graphics. If a suitable interface is provided, parallelization using GPUs becomes available to engineering computing. NVIDIA provides a Software Development Kit (SDK) named CUDA (Compute Unified Device Architecture) so that code developers can manage GPUs for parallelization using the C language. In this research, we implement parallel routines for the linear system solver using CUDA and examine the performance of the parallelization. In the next section, we describe the method of CUDA parallelization for the CUPID code, and then discuss the performance of the CUDA parallelization.
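The kernel being offloaded here is an iterative sparse linear solve whose cost is dominated by matrix-vector products, which is exactly the operation that maps well onto a GPU. The CPU-side numpy sketch below runs conjugate gradients on a 1D Poisson-type pressure system purely for illustration; the abstract does not state which solver CUPID uses, and weak diagonal dominance would typically call for preconditioning in practice:

```python
import numpy as np

# 1D Poisson (pressure-like) system: tridiagonal, symmetric positive definite
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# Conjugate gradients: each iteration is dominated by the matrix-vector
# product A @ p, the natural target for GPU parallelization
x = np.zeros(n)
r = b - A @ x
p = r.copy()
rs = r @ r
for _ in range(10 * n):
    Ap = A @ p
    alpha = rs / (p @ Ap)
    x += alpha * p
    r -= alpha * Ap
    rs_new = r @ r
    if np.sqrt(rs_new) < 1e-10:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

assert np.allclose(A @ x, b, atol=1e-6)
```

In a CUDA implementation, the vectors live in device memory and the product, dot products, and axpy updates each become a kernel launch; the algorithm itself is unchanged.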

  1. Parallel imaging of Drosophila embryos for quantitative analysis of genetic perturbations of the Ras pathway

    Directory of Open Access Journals (Sweden)

    Yogesh Goyal

    2017-07-01

    Full Text Available The Ras pathway patterns the poles of the Drosophila embryo by downregulating the levels and activity of a DNA-binding transcriptional repressor Capicua (Cic. We demonstrate that the spatiotemporal pattern of Cic during this signaling event can be harnessed for functional studies of mutations in the Ras pathway in human diseases. Our approach relies on a new microfluidic device that enables parallel imaging of Cic dynamics in dozens of live embryos. We found that although the pattern of Cic in early embryos is complex, it can be accurately approximated by a product of one spatial profile and one time-dependent amplitude. Analysis of these functions of space and time alone reveals the differential effects of mutations within the Ras pathway. Given the highly conserved nature of Ras-dependent control of Cic, our approach provides new opportunities for functional analysis of multiple sequence variants from developmental abnormalities and cancers.
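The finding that the Cic pattern is well approximated by "a product of one spatial profile and one time-dependent amplitude" is, in linear-algebra terms, a rank-1 approximation of the space-time data matrix, which the SVD provides optimally. A sketch on synthetic data (invented profile and amplitude, not the authors' measurements or pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic space-time data: rank-1 signal (spatial profile x amplitude) + noise
nx, nt = 80, 60
space = np.exp(-np.linspace(-3, 3, nx) ** 2)   # spatial profile
amp = np.sin(np.linspace(0, np.pi, nt))        # time-dependent amplitude
data = np.outer(space, amp) + 0.01 * rng.standard_normal((nx, nt))

# Best rank-1 approximation via the SVD
U, s, Vt = np.linalg.svd(data, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0])

# The rank-1 model captures nearly all of the variance
explained = s[0] ** 2 / (s ** 2).sum()
assert explained > 0.95
assert np.linalg.norm(data - rank1) / np.linalg.norm(data) < 0.25
```

Comparing mutants then reduces to comparing the recovered profile `U[:, 0]` and amplitude `s[0] * Vt[0]` separately, which is what makes the factorized description useful for screening perturbations.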

  2. Gravitational Waves: Search Results, Data Analysis and Parameter Estimation. Amaldi 10 Parallel Session C2

    Science.gov (United States)

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michal; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi

    2015-01-01

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  3. Gravitational waves: search results, data analysis and parameter estimation: Amaldi 10 Parallel session C2.

    Science.gov (United States)

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  4. Time complexity analysis for distributed memory computers: implementation of parallel conjugate gradient method

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Haan, M.J.; Hertzberger, L.O.; van Leeuwen, J.

    1991-01-01

    New developments in Computer Science, both hardware and software, offer researchers, such as physicists, unprecedented possibilities to solve their computationally intensive problems. However, full exploitation of e.g. new massively parallel computers, parallel languages or runtime environments

  5. Advanced mathematical on-line analysis in nuclear experiments. Usage of parallel computing CUDA routines in standard root analysis

    Directory of Open Access Journals (Sweden)

    Grzeszczuk A.

    2015-01-01

    Full Text Available Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by Nvidia to speed up graphics by carrying out calculations in parallel. The success of this solution opened General-Purpose Graphics Processing Unit (GPGPU) technology to applications not coupled with graphics. GPGPU systems can be applied as an effective tool for reducing the huge data volumes of pulse-shape-analysis measurements, either by on-line recalculation or by very fast compression. The simplified structure of the CUDA system and its programming model, illustrated on an Nvidia GeForce GTX 580 card, are presented in our poster contribution, both in a stand-alone version and as a ROOT application.

  6. Performance and scalability analysis of teraflop-scale parallel architectures using multidimensional wavefront applications

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, they analyze two problem sizes. The model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck; single-node efficiency is the dominant factor.
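A wavefront computation on a Px × Py processor grid has a pipeline-fill phase before the last processor receives work, after which all processors sweep in lockstep; a simple model of the kind validated above adds a per-step compute term and a per-step communication term. The parameter values below are invented for a back-of-the-envelope illustration and are not taken from the paper:

```python
def wavefront_time(px, py, n_steps, t_comp, t_comm):
    """Pipelined 2-D wavefront: the farthest processor starts after
    (px + py - 2) steps, then the sweep steps complete in sequence."""
    pipeline_fill = (px + py - 2) * (t_comp + t_comm)
    steady_state = n_steps * (t_comp + t_comm)
    return pipeline_fill + steady_state

def efficiency(px, py, n_steps, t_comp, t_comm):
    """Parallel efficiency relative to ideal speedup on px*py processors."""
    serial = n_steps * px * py * t_comp
    parallel = wavefront_time(px, py, n_steps, t_comp, t_comm)
    return serial / (px * py * parallel)

# Invented parameters: 16x16 processor grid, 10^4 wavefront steps,
# 1 microsecond compute and 0.2 microsecond communication per step
eff = efficiency(16, 16, 10_000, 1e-6, 0.2e-6)
assert 0.5 < eff < 1
```

With many steps the fill term is negligible and efficiency is governed by t_comp / (t_comp + t_comm), which matches the abstract's conclusion that single-node (compute) efficiency, not communication, dominates on the largest problem.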

  7. Mesh Partitioning Algorithm Based on Parallel Finite Element Analysis and Its Actualization

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    Full Text Available In parallel computing based on finite element analysis, domain decomposition is a key preprocessing technique. Generally, a domain decomposition of a mesh can be realized through partitioning of a graph converted from the finite element mesh. This paper discusses methods for graph partitioning and ways to carry out mesh partitioning. Relevant software is introduced, and the data structures and key functions of Metis and ParMetis are described. The writing, compiling, and testing of a mesh partitioning interface program based on these key functions are performed. The results reveal characteristics that can guide users of graph partitioning algorithms and software in writing parallel finite element method (PFEM) programs, and good partitioning quality can be achieved by carrying out mesh partitioning through the program. The interface program can also be used directly by engineering researchers as a module of PFEM software. This lowers the threshold for applying graph partitioning algorithms, improves calculation efficiency, and promotes the application of graph theory and parallel computing.
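The mesh-to-graph partitioning step described above can be illustrated with a deliberately simple stand-in: a BFS-level bisection of a small grid graph. This is a toy substitute for the multilevel k-way algorithms inside Metis, shown only to make the objective (balanced parts, small edge cut) concrete:

```python
from collections import deque

def bfs_bisect(adj):
    """Split a connected graph into two balanced halves by BFS visit order --
    a toy stand-in for the multilevel partitioning in Metis."""
    n = len(adj)
    order, seen = [], {0}
    q = deque([0])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    part = [0] * n
    for u in order[n // 2:]:
        part[u] = 1
    return part

def edge_cut(adj, part):
    # Number of edges whose endpoints lie in different parts
    return sum(part[u] != part[v] for u in range(len(adj)) for v in adj[u] if u < v)

# 4x4 grid mesh graph (nodes numbered row-major), a stand-in for an FE mesh
adj = {u: [] for u in range(16)}
for r in range(4):
    for c in range(4):
        u = 4 * r + c
        if c < 3:
            adj[u].append(u + 1); adj[u + 1].append(u)
        if r < 3:
            adj[u].append(u + 4); adj[u + 4].append(u)

part = bfs_bisect(adj)
assert sum(part) == 8            # balanced halves
assert edge_cut(adj, part) <= 8  # modest cut for a 4x4 grid
```

In PFEM terms, each part becomes a subdomain assigned to one processor, and the edge cut is proportional to the interface data that must be exchanged every iteration, which is why partition quality matters for efficiency.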

  8. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
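The Schlögl model named above is the standard bistable test case for such methods. A minimal Gillespie (stochastic simulation algorithm) sketch is given below; the rate constants and reservoir sizes are the commonly used textbook values, an assumption rather than parameters taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Schlögl model (a classic bistable network):
#   A + 2X -> 3X,  3X -> A + 2X,  B -> X,  X -> B
# Rate constants and pool sizes are typical textbook values (assumed).
c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
A, B = 1e5, 2e5

def gillespie(x, t_end):
    t = 0.0
    while t < t_end:
        a = np.array([c1 * A * x * (x - 1) / 2,
                      c2 * x * (x - 1) * (x - 2) / 6,
                      c3 * B,
                      c4 * x])
        a0 = a.sum()
        t += rng.exponential(1 / a0)
        r = rng.uniform(0, a0)
        if r < a[0]:
            x += 1   # A + 2X -> 3X
        elif r < a[:2].sum():
            x -= 1   # 3X -> A + 2X
        elif r < a[:3].sum():
            x += 1   # B -> X
        else:
            x -= 1   # X -> B
    return x

# Trajectories started near the two metastable states; transitions between
# them are rare, which is the sampling problem the parallel replica
# method addresses by running many replicas concurrently
lo = gillespie(100, 2.0)
hi = gillespie(600, 2.0)
assert lo >= 0 and hi >= 0
```

In the parallel replica setting, many such trajectories are dephased inside one metastable region and run in parallel, so the first observed escape can be reassigned a statistically correct waiting time.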

  9. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis.

    Science.gov (United States)

    Wang, Ting; Plecháč, Petr

    2017-12-21

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.

  10. PERFORMANCE ANALYSIS BETWEEN EXPLICIT SCHEDULING AND IMPLICIT SCHEDULING OF PARALLEL ARRAY-BASED DOMAIN DECOMPOSITION USING OPENMP

    Directory of Open Access Journals (Sweden)

    MOHAMMED FAIZ ABOALMAALY

    2014-10-01

    Full Text Available With the continuous revolution of multicore architecture, several parallel programming platforms have been introduced to pave the way for fast and efficient development of parallel algorithms. Parallel computing falls into two categories: Data-Level Parallelism (DLP) and Task-Level Parallelism (TLP). The former distributes data among the available processing elements, while the latter is based on executing independent tasks concurrently. Most parallel programming platforms have built-in techniques to distribute data among processors; these techniques are technically known as automatic distribution (scheduling). However, due to their wide range of purposes, the variation of data types, the amount of distributed data, the possibility of extra computational overhead, and other hardware-dependent factors, manual distribution can achieve better performance than automatic distribution. In this paper, this assumption is investigated by conducting a comparison between automatic distribution and our newly proposed manual distribution of data among threads. Empirical results for matrix addition and matrix multiplication show a considerable performance gain when manual distribution is applied instead of automatic distribution.
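The manual-versus-automatic distinction can be sketched outside OpenMP as well. Below, "manual" means we compute explicit block boundaries and hand each worker one contiguous chunk, while "automatic" hands the runtime one task per element and lets it schedule them. This is an illustrative Python analogue of the abstract's OpenMP experiment, not its code, and it demonstrates the distribution logic rather than the timing difference:

```python
from concurrent.futures import ThreadPoolExecutor

def manual_chunks(n, workers):
    """Explicit block distribution: worker w gets the half-open range
    [start, stop), with remainders spread over the first workers."""
    base, rem = divmod(n, workers)
    bounds, start = [], 0
    for w in range(workers):
        stop = start + base + (1 if w < rem else 0)
        bounds.append((start, stop))
        start = stop
    return bounds

data = list(range(1000))

# Manual (explicit) distribution: we decide the block boundaries,
# one coarse task per worker
with ThreadPoolExecutor(max_workers=4) as ex:
    partials = ex.map(lambda b: sum(data[b[0]:b[1]]), manual_chunks(len(data), 4))
manual_total = sum(partials)

# Automatic distribution: one fine-grained task per element,
# scheduling left entirely to the executor
with ThreadPoolExecutor(max_workers=4) as ex:
    auto_total = sum(ex.map(lambda i: data[i], range(len(data))))

assert manual_total == auto_total == sum(data)  # 499500
```

The coarse manual blocks incur far less scheduling overhead per unit of work, which is the same mechanism behind the performance gain the abstract reports for manual distribution.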

  11. Effect of Couple Stresses on the Stress Intensity Factors for Two Parallel Cracks in an Infinite Elastic Medium under Tension

    Directory of Open Access Journals (Sweden)

    Shouetsu Itou

    2012-01-01

    Full Text Available Stresses around two parallel cracks of equal length in an infinite elastic medium are evaluated based on the linearized couple-stress theory under uniform tension normal to the cracks. Fourier transformations are used to reduce the boundary conditions with respect to the upper crack to dual integral equations. In order to solve these equations, the differences in the displacements and in the rotation at the upper crack are expanded through a series of functions that are zero valued outside the crack. The unknown coefficients in each series are solved in order to satisfy the boundary conditions inside the crack using the Schmidt method. The stresses are expressed in terms of infinite integrals, and the stress intensity factors can be determined using the characteristics of the integrands for an infinite value of the variable of integration. Numerical calculations are carried out for selected crack configurations, and the effect of the couple stresses on the stress intensity factors is revealed.

  12. Layout design and energetic analysis of a complex diesel parallel hybrid electric vehicle

    International Nuclear Information System (INIS)

    Finesso, Roberto; Spessa, Ezio; Venditti, Mattia

    2014-01-01

    Highlights: • Layout design, energetic and cost analysis of complex parallel hybrid vehicles. • Development of global and real-time optimizers for control strategy identification. • Rule-based control strategies to minimize fuel consumption and NOx. • Energy share across each working mode for battery and thermal engine. - Abstract: The present paper is focused on the design, optimization and analysis of a complex parallel hybrid electric vehicle, equipped with two electric machines on both the front and rear axles, and on the evaluation of its potential to reduce fuel consumption and NOx emissions over several driving missions. The vehicle has been compared with two conventional parallel hybrid vehicles, equipped with a single electric machine on the front axle or on the rear axle, as well as with a conventional vehicle. All the vehicles have been equipped with compression ignition engines. The optimal layout of each vehicle was identified on the basis of the minimization of the overall powertrain costs during the whole vehicle life. These costs include the initial investment due to the production of the components as well as the operating costs related to fuel consumption and to battery depletion. Identification of the optimal powertrain control strategy, in terms of the management of the power flows of the engine and electric machines, and of gear selection, is necessary in order to be able to fully exploit the potential of the hybrid architecture. To this end, two global optimizers, one of a deterministic nature and another of a stochastic type, and two real-time optimizers have been developed, applied and compared. A new mathematical technique has been developed and applied to the vehicle simulation model in order to decrease the computational time of the optimizers. First, the vehicle model equations were written in order to allow a coarse time grid to be used; then, the control variables (i.e., power flow and gear number) were discretized, and the

  13. Design, analysis and control of cable-suspended parallel robots and its applications

    CERN Document Server

    Zi, Bin

    2017-01-01

    This book provides an essential overview of the authors’ work in the field of cable-suspended parallel robots, focusing on innovative design, mechanics, control, development and applications. It presents and analyzes several typical mechanical architectures of cable-suspended parallel robots in practical applications, including the feed cable-suspended structure for super antennae, hybrid-driven-based cable-suspended parallel robots, and cooperative cable parallel manipulators for multiple mobile cranes. It also addresses the fundamental mechanics of cable-suspended parallel robots on the basis of their typical applications, including the kinematics, dynamics and trajectory tracking control of the feed cable-suspended structure for super antennae. In addition it proposes a novel hybrid-driven-based cable-suspended parallel robot that uses integrated mechanism design methods to improve the performance of traditional cable-suspended parallel robots. A comparative study on error and performance indices of hybr...

  14. Advanced exergy analysis of a R744 booster refrigeration system with parallel compression

    DEFF Research Database (Denmark)

    Gullo, Paride; Elmegaard, Brian; Cortella, Giovanni

    2016-01-01

    In this paper, the advanced exergy analysis was applied to an R744 booster refrigeration system with parallel compression taking into account the design external temperatures of 25 degrees C and 35 degrees C, as well as the operating conditions of a conventional European supermarket. The global...... efficiencies of all the chosen compressors were extrapolated from some manufacturers' data and appropriate optimization procedures of the performance of the investigated solution were implemented. According to the results associated with the conventional exergy evaluation, the gas cooler/condenser, the HS (high...... stage) compressor and the MT (medium temperature) display cabinet exhibited the highest enhancement potential. The further splitting of their corresponding exergy destruction rates into their different parts and the following assessment of the interactions among the components allowed figuring out...

  15. Power Factor Correction Capacitors for Multiple Parallel Three-Phase ASD Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Today’s three-phase Adjustable Speed Drive (ASD) systems still employ Diode Rectifiers (DRs) and Silicon-Controlled Rectifiers (SCRs) as the front-end converters due to structural and control simplicity, small volume, low cost, and high reliability. However, the uncontrollable DRs and phase......-controllable SCRs bring side-effects by injecting high harmonics into the grid, which will degrade the system performance in terms of lowering the overall efficiency and overheating the system if they remain uncontrolled or unattenuated. For multiple ASD systems, certain harmonics in the entire system can be mitigated...... the power factor, passive capacitors can be installed, which, however, can trigger system resonance. Hence, this paper analyzes the resonant issues in multiple ASD systems with power factor correction capacitors. Potential damping solutions are summarized. Simulations are carried out, while laboratory tests...

  16. Parallel Expansions of Sox Transcription Factor Group B Predating the Diversifications of the Arthropods and Jawed Vertebrates

    Science.gov (United States)

    Zhong, Lei; Wang, Dengqiang; Gan, Xiaoni; Yang, Tong; He, Shunping

    2011-01-01

    Group B of the Sox transcription factor family is crucial in embryo development in the insects and vertebrates. Sox group B, unlike the other Sox groups, has an unusually enlarged functional repertoire in insects, but the timing and mechanism of the expansion of this group were unclear. We collected and analyzed data for Sox group B from 36 species of 12 phyla representing the major metazoan clades, with an emphasis on arthropods, to reconstruct the evolutionary history of SoxB in bilaterians and to date the expansion of Sox group B in insects. We found that the genome of the bilaterian last common ancestor probably contained one SoxB1 and one SoxB2 gene only and that tandem duplications of SoxB2 occurred before the arthropod diversification but after the arthropod-nematode divergence, resulting in the basal repertoire of Sox group B in diverse arthropod lineages. The arthropod Sox group B repertoire expanded differently from the vertebrate repertoire, which resulted from genome duplications. The parallel increases in the Sox group B repertoires of the arthropods and vertebrates are consistent with the parallel increases in the complexity and diversification of these two important organismal groups. PMID:21305035

  17. Kinematics/statics analysis of a novel serial-parallel robotic arm with hand

    International Nuclear Information System (INIS)

    Lu, Yi; Dai, Zhuohong; Ye, Nijia; Wang, Peng

    2015-01-01

    A robotic arm with a fingered hand generally has multiple functions to complete various complicated operations. A novel serial-parallel robotic arm with a hand is proposed and its kinematics and statics are studied systematically. A 3D prototype of the serial-parallel robotic arm with a hand is constructed and analyzed by simulation. The serial-parallel robotic arm with a hand is composed of an upper 3RPS parallel manipulator, a lower 3SPR parallel manipulator and a hand with three finger mechanisms. Its kinematics formulae for solving the displacement, velocity and acceleration are derived. Its statics formula for solving the active/constrained forces is derived. Its reachable workspace and orientation workspace are constructed and analyzed. Finally, an analytic example is given for solving the kinematics and statics of the serial-parallel robotic arm with a hand and the analytic solutions are verified by a simulation mechanism.

  18. Kinematics/statics analysis of a novel serial-parallel robotic arm with hand

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yi; Dai, Zhuohong; Ye, Nijia; Wang, Peng [Yanshan University, Hebei (China)

    2015-10-15

    A robotic arm with a fingered hand generally has multiple functions to complete various complicated operations. A novel serial-parallel robotic arm with a hand is proposed and its kinematics and statics are studied systematically. A 3D prototype of the serial-parallel robotic arm with a hand is constructed and analyzed by simulation. The serial-parallel robotic arm with a hand is composed of an upper 3RPS parallel manipulator, a lower 3SPR parallel manipulator and a hand with three finger mechanisms. Its kinematics formulae for solving the displacement, velocity and acceleration are derived. Its statics formula for solving the active/constrained forces is derived. Its reachable workspace and orientation workspace are constructed and analyzed. Finally, an analytic example is given for solving the kinematics and statics of the serial-parallel robotic arm with a hand and the analytic solutions are verified by a simulation mechanism.

  19. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)

    2017-02-01

    efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstratively well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.

  20. Energization of Long HVAC Cables in Parallel - Analysis and Estimation Formulas

    DEFF Research Database (Denmark)

    Silva, Filipe Faria Da; Bak, Claus Leth

    2012-01-01

    The installation of long HVAC cables has recently become more common and this trend is expected to continue in the coming years. Consequently, the energization of long HVAC cables in parallel is also a more common condition. The energization of HVAC cables in parallel resembles the energization of capacitor...... has several simplifications and does not always provide accurate results. This paper proposes a new formula that can be used for the estimation of these two quantities for two HVAC cables in parallel.

  1. Parallel analysis and orthogonal identification of N-glycans with different capillary electrophoresis mechanisms

    International Nuclear Information System (INIS)

    Feng, Hua-tao; Su, Min; Rifai, Farida Nur; Li, Pingjing; Li, Sam F.Y.

    2017-01-01

    The deep involvement of glycans or carbohydrate moieties in biological processes makes glycan patterns an important direction for clinical and medical research. A multiplexing CE mapping method for glycan analysis was developed in this study. By applying different CE separation mechanisms, the potential of combined parallel applications of capillary zone electrophoresis (CZE), micellar electrokinetic chromatography (MEKC) and capillary gel electrophoresis (CGE) for rapid and accurate identification of glycans was investigated. The combination of CZE and MEKC demonstrated enhanced separation capacity without compromises in sample pre-treatment and glycan concentration. The separation mechanisms for the multiplexing platform were selected based on the orthogonalities of the separation of glycan standards. The MEKC method exhibited a promising ability for the analysis of small-GU-value glycans, thus compensating for the unavailability of CZE. The established method required only a small amount of sample, a simple instrument and a single fluorescent label for sensitive detection. This integrated method can be used to search for important glycan patterns appearing in biopharmaceutical products and other glycoproteins with clinical importance. - Highlights: • Cross-validation of analytes in complex samples was done with different CE separation mechanisms. • A simple strategy is used to confirm peak identification and extend capacity of CE separation. • The method uses a small amount of sample, a simple instrument and a single fluorescent label. • Selection of mechanisms is based on the orthogonalities of GU values of glycan standards. • Micellar electrokinetic chromatography was suitable for the analysis of small or highly sialylated glycans.

  2. Parallel analysis and orthogonal identification of N-glycans with different capillary electrophoresis mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Hua-tao [Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore 117543 (Singapore); NUS Environmental Research Institute, 5A Engineering Drive 1, T-Lab Building, Singapore 117411 (Singapore); Su, Min; Rifai, Farida Nur [Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore 117543 (Singapore); Li, Pingjing [NUS Environmental Research Institute, 5A Engineering Drive 1, T-Lab Building, Singapore 117411 (Singapore); Li, Sam F.Y., E-mail: chmlifys@nus.edu.sg [Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore 117543 (Singapore); NUS Environmental Research Institute, 5A Engineering Drive 1, T-Lab Building, Singapore 117411 (Singapore)

    2017-02-08

    The deep involvement of glycans or carbohydrate moieties in biological processes makes glycan patterns an important direction for clinical and medical research. A multiplexing CE mapping method for glycan analysis was developed in this study. By applying different CE separation mechanisms, the potential of combined parallel applications of capillary zone electrophoresis (CZE), micellar electrokinetic chromatography (MEKC) and capillary gel electrophoresis (CGE) for rapid and accurate identification of glycans was investigated. The combination of CZE and MEKC demonstrated enhanced separation capacity without compromises in sample pre-treatment and glycan concentration. The separation mechanisms for the multiplexing platform were selected based on the orthogonalities of the separation of glycan standards. The MEKC method exhibited a promising ability for the analysis of small-GU-value glycans, thus compensating for the unavailability of CZE. The established method required only a small amount of sample, a simple instrument and a single fluorescent label for sensitive detection. This integrated method can be used to search for important glycan patterns appearing in biopharmaceutical products and other glycoproteins with clinical importance. - Highlights: • Cross-validation of analytes in complex samples was done with different CE separation mechanisms. • A simple strategy is used to confirm peak identification and extend capacity of CE separation. • The method uses a small amount of sample, a simple instrument and a single fluorescent label. • Selection of mechanisms is based on the orthogonalities of GU values of glycan standards. • Micellar electrokinetic chromatography was suitable for the analysis of small or highly sialylated glycans.

  3. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    Science.gov (United States)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

    Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems mainly depends on the selection of the most relevant features. This becomes harder when the dataset contains missing values for different features. Probabilistic Principal Component Analysis (PPCA) has a good reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides a diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts projection vectors which capture the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
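    Horn's parallel analysis, the usual form of the PA step used to decide how many projection vectors to retain, can be sketched as follows (a generic illustration with an assumed simulation count and percentile, not the authors' exact procedure):

```python
import numpy as np

def horns_parallel_analysis(X, n_sims=200, pct=95, seed=0):
    """Retain components whose observed correlation-matrix eigenvalues
    exceed the pct-th percentile of eigenvalues obtained from random
    normal data of the same shape (Horn's parallel analysis)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Observed eigenvalues, sorted in descending order
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresh = np.percentile(sims, pct, axis=0)
    return int(np.sum(obs > thresh)), obs, thresh
```

    On data with a few strong latent factors plus noise, the retained count tracks the number of factors rather than the raw eigenvalue-greater-than-one rule.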

  4. QuASAR-MPRA: accurate allele-specific analysis for massively parallel reporter assays.

    Science.gov (United States)

    Kalita, Cynthia A; Moyerbrailean, Gregory A; Brown, Christopher; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger

    2018-03-01

    The majority of the human genome is composed of non-coding regions containing regulatory elements such as enhancers, which are crucial for controlling gene expression. Many variants associated with complex traits are in these regions, and may disrupt gene regulatory sequences. Consequently, it is important to not only identify true enhancers but also to test if a variant within an enhancer affects gene regulation. Recently, allele-specific analysis in high-throughput reporter assays, such as massively parallel reporter assays (MPRAs), has been used to functionally validate non-coding variants. However, we are still missing high-quality and robust data analysis tools for these datasets. We have further developed our method for allele-specific analysis, QuASAR (quantitative allele-specific analysis of reads), to analyze allele-specific signals in barcoded read count data from MPRA. Using this approach, we can take into account the uncertainty in the original plasmid proportions, over-dispersion, and sequencing errors. The provided allelic skew estimate and its standard error also simplify meta-analysis of replicate experiments. Additionally, we show that a beta-binomial distribution better models the variability present in the allelic imbalance of these synthetic reporters and results in a test that is statistically well calibrated under the null. Applying this approach to the MPRA data, we found 602 SNPs with significant (false discovery rate 10%) allele-specific regulatory function in LCLs. We also show that we can combine MPRA with QuASAR estimates to validate existing experimental and computational annotations of regulatory variants. Our study shows that with appropriate data analysis tools, we can improve the power to detect allelic effects in high-throughput reporter assays. http://github.com/piquelab/QuASAR/tree/master/mpra. fluca@wayne.edu or rpique@wayne.edu. Supplementary data are available online at Bioinformatics. © The Author (2017). Published by
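    The core of an allele-specific test on reporter read counts can be illustrated with a plain exact binomial test. This is a deliberate simplification of the QuASAR-MPRA model: the plasmid proportion p0 is treated as known and the over-dispersion captured by the beta-binomial is ignored:

```python
from math import comb

def binomial_allele_test(ref, alt, p0=0.5):
    """Two-sided exact binomial test for allelic imbalance in (ref, alt)
    read counts.  Simplified illustration of the allele-specific testing
    idea; QuASAR-MPRA itself uses a beta-binomial with estimated
    dispersion and plasmid proportions."""
    n = ref + alt
    pmf = [comb(n, i) * p0 ** i * (1 - p0) ** (n - i) for i in range(n + 1)]
    # Two-sided p-value: total probability of all outcomes no more
    # likely than the observed one
    pval = sum(p for p in pmf if p <= pmf[ref] * (1 + 1e-12))
    return min(1.0, pval)
```

    Balanced counts give a p-value near 1, while strongly skewed counts give small p-values that would then be corrected for multiple testing (e.g., FDR).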

  5. Kinematics analysis of a novel planar parallel manipulator with kinematic redundancy

    Energy Technology Data Exchange (ETDEWEB)

    Qu, Haibo; Guo, Sheng [Beijing Jiaotong University, Beijing (China)

    2017-04-15

    In this paper, a novel planar parallel manipulator with kinematic redundancy is proposed. First, the Degrees of freedom (DOF) of the whole parallel manipulator and the Relative DOF (RDOF) between the moving platform and fixed base are studied. The results indicate that the proposed mechanism is kinematically redundant. Then, the kinematics, Jacobian matrices and workspace of this proposed parallel manipulator with kinematic redundancy are analyzed. Finally, the statics simulation of the proposed parallel manipulator is performed. The obtained stress and displacement distributions can be used to determine the most easily damaged locations in the mechanism configurations.

  6. Kinematics analysis of a novel planar parallel manipulator with kinematic redundancy

    International Nuclear Information System (INIS)

    Qu, Haibo; Guo, Sheng

    2017-01-01

    In this paper, a novel planar parallel manipulator with kinematic redundancy is proposed. First, the Degrees of freedom (DOF) of the whole parallel manipulator and the Relative DOF (RDOF) between the moving platform and fixed base are studied. The results indicate that the proposed mechanism is kinematically redundant. Then, the kinematics, Jacobian matrices and workspace of this proposed parallel manipulator with kinematic redundancy are analyzed. Finally, the statics simulation of the proposed parallel manipulator is performed. The obtained stress and displacement distributions can be used to determine the most easily damaged locations in the mechanism configurations.

  7. Stress intensity factors of three parallel edge cracks under bending moments

    International Nuclear Information System (INIS)

    Ismail, A E

    2013-01-01

    This paper reports a study of the stress intensity factors (SIFs) of three edge cracks in a finite plate under bending moments. The goal of this paper was to analyze the interactions of the three edge cracks under such loading. Several studies discussing mode I SIFs can be found in the literature. However, most of these studies obtained the SIFs under tensile force; few have reported SIFs obtained under bending moments. The ANSYS finite element program was used to develop the finite element model, where singular elements were used to model the cracks. Different crack geometries and parameters were utilized in order to characterize the SIFs. According to the present results, crack geometry played a significant role in determining the SIFs and consequently induced the crack interaction mechanisms
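    For a single edge crack in a strip under pure bending, a widely tabulated polynomial geometry factor gives the mode I SIF; the three-crack interaction studied in the paper modifies this single-crack baseline. The fit below is a standard handbook expression (assumed valid for a/W <= 0.6), not the paper's finite element results:

```python
from math import pi, sqrt

def sif_edge_crack_bending(sigma_b, a, W):
    """Mode I stress intensity factor K_I = Y(a/W) * sigma_b * sqrt(pi*a)
    for a SINGLE edge crack in a strip under pure bending.

    sigma_b : outer-fibre bending stress, a : crack depth, W : strip width.
    Y(a/W) is a commonly tabulated polynomial fit (assumption: valid
    for a/W <= 0.6)."""
    r = a / W
    Y = 1.122 - 1.40 * r + 7.33 * r ** 2 - 13.08 * r ** 3 + 14.0 * r ** 4
    return Y * sigma_b * sqrt(pi * a)
```

    As a/W tends to zero, Y approaches the classic free-surface value of about 1.122, and K_I grows monotonically with crack depth for a fixed width.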

  8. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    Science.gov (United States)

    Chang, Weng-Long

    2012-03-01

    Assume that n is a positive integer. If there is an integer M such that M^2 ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence does not have a solution, then C is said to be a quadratic noncongruence (mod n). The task of solving the problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruence and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift and comparison operations, namely bitwise and full addition, subtraction, left shifting and comparison, are performed using strands of DNA.
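    The decision problem itself, stripped of the DNA encoding, can be stated in a few lines of conventional code: a brute-force search over residues, plus Euler's criterion for the prime-modulus case:

```python
def is_quadratic_congruence(C, n):
    """True if M*M % n == C % n has a solution, by brute force over all
    residues.  A conventional restatement of the decision problem; the
    DNA encoding of the article is not reproduced here."""
    return any(m * m % n == C % n for m in range(n))

def euler_criterion(C, p):
    """For an odd prime p and C not divisible by p, C is a quadratic
    residue mod p iff C^((p-1)/2) is congruent to 1 (mod p)."""
    return pow(C, (p - 1) // 2, p) == 1
```

    The brute-force check works for any modulus but costs O(n) squarings, which is exactly the kind of search space a massively parallel (e.g., DNA-based) model attacks all at once.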

  9. Nuclear respiratory factor 2 regulates the expression of the same NMDA receptor subunit genes as NRF-1: both factors act by a concurrent and parallel mechanism to couple energy metabolism and synaptic transmission.

    Science.gov (United States)

    Priya, Anusha; Johar, Kaid; Wong-Riley, Margaret T T

    2013-01-01

    Neuronal activity and energy metabolism are tightly coupled processes. Previously, we found that nuclear respiratory factor 1 (NRF-1) transcriptionally co-regulates energy metabolism and neuronal activity by regulating all 13 subunits of the critical energy generating enzyme, cytochrome c oxidase (COX), as well as N-methyl-d-aspartate (NMDA) receptor subunits 1 and 2B, GluN1 (Grin1) and GluN2B (Grin2b). We also found that another transcription factor, nuclear respiratory factor 2 (NRF-2 or GA-binding protein) regulates all subunits of COX as well. The goal of the present study was to test our hypothesis that NRF-2 also regulates specific subunits of NMDA receptors, and that it functions with NRF-1 via one of three mechanisms: complementary, concurrent and parallel, or a combination of complementary and concurrent/parallel. By means of multiple approaches, including in silico analysis, electrophoretic mobility shift and supershift assays, in vivo chromatin immunoprecipitation of mouse neuroblastoma cells and rat visual cortical tissue, promoter mutations, real-time quantitative PCR, and western blot analysis, NRF-2 was found to functionally regulate Grin1 and Grin2b genes, but not any other NMDA subunit genes. Grin1 and Grin2b transcripts were up-regulated by depolarizing KCl, but silencing of NRF-2 prevented this up-regulation. On the other hand, over-expression of NRF-2 rescued the down-regulation of these subunits by the impulse blocker TTX. NRF-2 binding sites on Grin1 and Grin2b are conserved among species. Our data indicate that NRF-2 and NRF-1 operate in a concurrent and parallel manner in mediating the tight coupling between energy metabolism and neuronal activity at the molecular level. Copyright © 2012 Elsevier B.V. All rights reserved.

  10. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a
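    The "rotation" step such a guide works through can be illustrated with a compact varimax implementation. This is the standard SVD-based algorithm, given as a generic sketch rather than the book's worked example:

```python
import numpy as np

def varimax(L, n_iter=100, tol=1e-8):
    """Varimax rotation of a p x k loading matrix L.

    Iteratively finds an orthogonal rotation R that maximizes the
    variance of the squared loadings (the standard SVD-based scheme).
    """
    p, k = L.shape
    R = np.eye(k)          # accumulated orthogonal rotation
    d = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break          # converged
        d = s.sum()
    return L @ R
```

    Because R is orthogonal, the communalities (row sums of squared loadings) are unchanged; only the distribution of loadings across factors is simplified.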

  11. Parallel O(log n) algorithms for open- and closed-chain rigid multibody systems based on a new mass matrix factorization technique

    Science.gov (United States)

    Fijany, Amir

    1993-01-01

    In this paper, parallel O(log n) algorithms for computation of rigid multibody dynamics are developed. These parallel algorithms are derived by parallelization of new O(n) algorithms for the problem. The underlying feature of these O(n) algorithms is a drastically different strategy for decomposition of interbody force which leads to a new factorization of the mass matrix (M). Specifically, it is shown that a factorization of the inverse of the mass matrix in the form of the Schur complement is derived as M^(-1) = C - B^* A^(-1) B, wherein matrices C, A, and B are block tridiagonal matrices. The new O(n) algorithm is then derived as a recursive implementation of this factorization of M^(-1). For the closed-chain systems, similar factorizations and O(n) algorithms for computation of the Operational Space Mass Matrix Lambda and its inverse Lambda^(-1) are also derived. It is shown that these O(n) algorithms are strictly parallel, that is, they are less efficient than other algorithms for serial computation of the problem. But, to our knowledge, they are the only known algorithms that can be parallelized and that lead to both time- and processor-optimal parallel algorithms for the problem, i.e., parallel O(log n) algorithms with O(n) processors. The developed parallel algorithms, in addition to their theoretical significance, are also practical from an implementation point of view due to their simple architectural requirements.
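    The Schur-complement algebra underlying factorizations of this form can be checked numerically on a dense symmetric matrix. The sketch below ignores the block-tridiagonal structure that makes the paper's recursion O(n); it only verifies the standard identity relating a Schur complement to a block of the inverse:

```python
import numpy as np

def schur_complement(K, m):
    """Schur complement S = D - B^T A^(-1) B of the leading m x m block A
    in the symmetric partitioned matrix K = [[A, B], [B^T, D]].

    Dense illustration only; in the paper's factorization the analogous
    A, B, C factors are block tridiagonal, which enables O(n) recursion
    and O(log n) parallelization."""
    A, B, D = K[:m, :m], K[:m, m:], K[m:, m:]
    return D - B.T @ np.linalg.solve(A, B)
```

    The standard identity states that the inverse of the Schur complement equals the corresponding lower-right block of the inverse of the full matrix, which is what makes such factorizations useful for assembling inverses recursively.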

  12. Oscillatory flow at the end of parallel-plate stacks: phenomenological and similarity analysis

    International Nuclear Information System (INIS)

    Mao Xiaoan; Jaworski, Artur J

    2010-01-01

    This paper addresses the physics of the oscillatory flow in the vicinity of a series of parallel plates forming geometrically identical channels. This type of flow is particularly relevant to thermoacoustic engines and refrigerators, where a reciprocating flow is responsible for the desirable energy transfer, but it is also of interest to general fluid mechanics of oscillatory flows past bluff bodies. In this paper, the physics of an acoustically induced flow past a series of plates in an isothermal condition is studied in detail using the data provided by PIV imaging. Particular attention is given to the analysis of the wake flow during the ejection part of the flow cycle, where either closed recirculating vortices or alternating vortex shedding can be observed. This is followed by a similarity analysis of the governing Navier-Stokes equations in order to derive the similarity criteria governing the wake flow behaviour. To this end, similarity numbers including two types of Reynolds number, the Keulegan-Carpenter number and a non-dimensional stack configuration parameter, d/h, are considered and their influence on the phenomena are discussed.
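    The similarity numbers named above can be collected in a small helper. The exact definitions (which length scale enters, and whether a factor of 2*pi appears) vary between authors, so the conventions below are common choices assumed for illustration, not necessarily the paper's:

```python
from math import pi, sqrt

def similarity_numbers(u_max, freq, d, h, nu=1.5e-5):
    """Non-dimensional groups for oscillatory flow past a plate stack.

    u_max : velocity amplitude [m/s], freq : oscillation frequency [Hz],
    d : plate thickness [m], h : channel height [m],
    nu : kinematic viscosity [m^2/s] (default: air, an assumption).
    Definitions follow common usage, not a specific source."""
    delta_nu = sqrt(2 * nu / (2 * pi * freq))   # viscous penetration depth
    return {
        "Re_d": u_max * d / nu,        # Reynolds number on plate thickness
        "Re_delta": u_max * delta_nu / nu,  # Reynolds number on delta_nu
        "KC": u_max / (freq * d),      # Keulegan-Carpenter number
        "d_over_h": d / h,             # stack configuration parameter
    }
```

    The Keulegan-Carpenter number compares the fluid displacement amplitude to the plate thickness, which is why it governs whether attached recirculating vortices or alternate vortex shedding appear during the ejection phase.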

  13. Parallel analysis of tagged deletion mutants efficiently identifies genes involved in endoplasmic reticulum biogenesis.

    Science.gov (United States)

    Wright, Robin; Parrish, Mark L; Cadera, Emily; Larson, Lynnelle; Matson, Clinton K; Garrett-Engele, Philip; Armour, Chris; Lum, Pek Yee; Shoemaker, Daniel D

    2003-07-30

    Increased levels of HMG-CoA reductase induce cell type- and isozyme-specific proliferation of the endoplasmic reticulum. In yeast, the ER proliferations induced by Hmg1p consist of nuclear-associated stacks of smooth ER membranes known as karmellae. To identify genes required for karmellae assembly, we compared the composition of populations of homozygous diploid S. cerevisiae deletion mutants following 20 generations of growth with and without karmellae. Using an initial population of 1,557 deletion mutants, 120 potential mutants were identified as a result of three independent experiments. Each experiment produced a largely non-overlapping set of potential mutants, suggesting that differences in specific growth conditions could be used to maximize the comprehensiveness of similar parallel analysis screens. Only two genes, UBC7 and YAL011W, were identified in all three experiments. Subsequent analysis of individual mutant strains confirmed that each experiment was identifying valid mutations, based on the mutant's sensitivity to elevated HMG-CoA reductase and inability to assemble normal karmellae. The largest class of HMG-CoA reductase-sensitive mutations was a subset of genes that are involved in chromatin structure and transcriptional regulation, suggesting that karmellae assembly requires changes in transcription or that the presence of karmellae may interfere with normal transcriptional regulation. Copyright 2003 John Wiley & Sons, Ltd.

  14. Screw Theory Based Singularity Analysis of Lower-Mobility Parallel Robots considering the Motion/Force Transmissibility and Constrainability

    Directory of Open Access Journals (Sweden)

    Xiang Chen

    2015-01-01

    Singularity is an inherent characteristic of parallel robots and is also a typical mathematical problem in engineering applications. In general, to identify a singular configuration, the corresponding singular solution should be derived mathematically. This work introduces an alternative approach to the singularity identification of lower-mobility parallel robots considering the motion/force transmissibility and constrainability. The theory of screws is used as the mathematical tool to define the transmission and constraint indices of parallel robots. The singularity is hereby classified into four types concerning both input and output members of a parallel robot, that is, input transmission singularity, output transmission singularity, input constraint singularity, and output constraint singularity. Furthermore, we take several typical parallel robots as examples to illustrate the process of singularity analysis. Particularly, the input and output constraint singularities which are first proposed in this work are depicted in detail. The results demonstrate that the method can not only identify all possible singular configurations, but also explain their physical meanings. Therefore, the proposed approach is proved to be comprehensible and effective in solving singularity problems in parallel mechanisms.

  15. Position Analysis of a Hybrid Serial-Parallel Manipulator in Immersion Lithography

    Directory of Open Access Journals (Sweden)

    Jie-jie Shao

    2015-01-01

    This paper proposes a novel hybrid serial-parallel mechanism with 6 degrees of freedom. The new mechanism combines two different parallel modules in a serial form. The 3-P̲(PH parallel module is a 3-degree-of-freedom architecture based on higher joints and specializes in describing the relative pose of two planes. The 3-P̲SP parallel module is a typical architecture which has been widely investigated in recent research. In this paper, the direct and inverse position problems of the 3-P̲SP parallel module in the coupled mixed-type mode are analyzed in detail, and the solutions are obtained in an analytical form. Furthermore, the solutions for the direct and inverse position problems of the novel hybrid serial-parallel mechanism are also derived and obtained in analytical form. The proposed hybrid serial-parallel mechanism is applied to regulate the immersion hood’s pose in an immersion lithography system. By measuring and regulating the pose of the immersion hood with respect to the wafer surface simultaneously, the immersion hood can track the wafer surface’s pose in real time and the gap status is stabilized. This is another exploration of the hybrid serial-parallel mechanism’s applications.

  16. Parallel manipulators with two end-effectors : Getting a grip on Jacobian-based stiffness analysis

    NARCIS (Netherlands)

    Hoevenaars, A.G.L.

    2016-01-01

    Robots that are developed for applications which require a high stiffness-over-inertia ratio, such as pick-and-place robots, machining robots, or haptic devices, are often based on parallel manipulators. Parallel manipulators connect an end-effector to an inertial base using multiple serial

  17. Unified dataflow model for the analysis of data and pipeline parallelism, and buffer sizing

    NARCIS (Netherlands)

    Hausmans, J.P.H.M.; Geuns, S.J.; Wiggers, M.H.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Real-time stream processing applications such as software defined radios are usually executed concurrently on multiprocessor systems. Exploiting coarse-grained data parallelism by duplicating tasks is often required, besides pipeline parallelism, to meet the temporal constraints of the applications.

  18. High fidelity thermal-hydraulic analysis using CFD and massively parallel computers

    International Nuclear Information System (INIS)

    Weber, D.P.; Wei, T.Y.C.; Brewster, R.A.; Rock, Daniel T.; Rizwan-uddin

    2000-01-01

    Thermal-hydraulic analyses play an important role in design and reload analysis of nuclear power plants. These analyses have historically relied on early generation computational fluid dynamics capabilities, originally developed in the 1960s and 1970s. Over the last twenty years, however, dramatic improvements in both computational fluid dynamics codes in the commercial sector and in computing power have taken place. These developments offer the possibility of performing large scale, high fidelity, core thermal hydraulics analysis. Such analyses will allow a determination of the conservatism employed in traditional design approaches and possibly justify the operation of nuclear power systems at higher powers without compromising safety margins. The objective of this work is to demonstrate such a large scale analysis approach using a state of the art CFD code, STAR-CD, and the computing power of massively parallel computers, provided by IBM. A high fidelity representation of a current generation PWR was analyzed with the STAR-CD CFD code and the results were compared to traditional analyses based on the VIPRE code. Current design methodology typically involves a simplified representation of the assemblies, where a single average pin is used in each assembly to determine the hot assembly from a whole core analysis. After determining this assembly, increased refinement is used in the hot assembly, and possibly some of its neighbors, to refine the analysis for purposes of calculating DNBR. This latter calculation is performed with sub-channel codes such as VIPRE. The modeling simplifications that are used involve the approximate treatment of surrounding assemblies and coarse representation of the hot assembly, where the subchannel is the lowest level of discretization. In the high fidelity analysis performed in this study, both restrictions have been removed. Within the hot assembly, several hundred thousand to several million computational zones have been used, to

  19. Kinematics and dynamics analysis of a novel serial-parallel dynamic simulator

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Bo; Zhang, Lian Dong; Yu, Jingjing [Parallel Robot and Mechatronic System Laboratory of Hebei Province, Yanshan University, Qinhuangdao, Hebei (China)

    2016-11-15

    A serial-parallel dynamics simulator based on a serial-parallel manipulator is proposed. According to the motion requirements of the dynamics simulator, the proposed simulator, formed by 3-RRS (active revolute joint-revolute joint-spherical joint) and 3-SPR (spherical joint-active prismatic joint-revolute joint) parallel manipulators (PMs), adopts an outer and inner layout. By integrating the kinematics, constraint, and coupling information of the 3-RRS and 3-SPR PMs into the serial-parallel manipulator, the inverse Jacobian matrix, velocity, and acceleration of the serial-parallel dynamics simulator are studied. Based on the principle of virtual work and the kinematics model, the inverse dynamic model is established. Finally, the workspace of the (3-RRS)+(3-SPR) dynamics simulator is constructed.

  20. Kinematics and dynamics analysis of a novel serial-parallel dynamic simulator

    International Nuclear Information System (INIS)

    Hu, Bo; Zhang, Lian Dong; Yu, Jingjing

    2016-01-01

    A serial-parallel dynamics simulator based on a serial-parallel manipulator is proposed. According to the motion requirements of the dynamics simulator, the proposed simulator, formed by 3-RRS (active revolute joint-revolute joint-spherical joint) and 3-SPR (spherical joint-active prismatic joint-revolute joint) parallel manipulators (PMs), adopts an outer and inner layout. By integrating the kinematics, constraint, and coupling information of the 3-RRS and 3-SPR PMs into the serial-parallel manipulator, the inverse Jacobian matrix, velocity, and acceleration of the serial-parallel dynamics simulator are studied. Based on the principle of virtual work and the kinematics model, the inverse dynamic model is established. Finally, the workspace of the (3-RRS)+(3-SPR) dynamics simulator is constructed.
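    The virtual-work step in such an inverse dynamic model can be illustrated in miniature: for any virtual displacement, actuator work equals end-effector work, so a task-space wrench maps to actuator forces through the Jacobian transpose. A sketch with a hypothetical 2-DOF Jacobian, not the (3-RRS)+(3-SPR) model itself:

```python
import numpy as np

# Principle of virtual work: for any virtual displacement dq,
# tau^T dq = F^T dx with dx = J dq, hence tau = J^T F.
def actuator_forces(J, wrench):
    """Map a task-space wrench to actuator forces via the Jacobian transpose."""
    return J.T @ wrench

# Hypothetical 2-DOF planar example (J chosen for illustration only).
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
F = np.array([2.0, -1.0])   # task-space force
tau = actuator_forces(J, F)
print(tau)                   # [2. 0.]
```

    The full simulator additionally needs the velocity and acceleration terms of each limb, but the static mapping above is the core of the virtual-work formulation.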

  1. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML

    Directory of Open Access Journals (Sweden)

    Rhodri eCusack

    2015-01-01

    Full Text Available Recent years have seen neuroimaging data becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complex to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast and efficient, for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.

  2. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML.

    Science.gov (United States)

    Cusack, Rhodri; Vicente-Grabovetsky, Alejandro; Mitchell, Daniel J; Wild, Conor J; Auer, Tibor; Linke, Annika C; Peelle, Jonathan E

    2014-01-01

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
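    The dependency map such a processing engine computes can be emulated with the standard library. A toy sketch using Python's graphlib, with hypothetical module names in the spirit of an fMRI pipeline; modules returned in the same batch have no mutual dependencies and could be dispatched to cluster or cloud workers in parallel:

```python
from graphlib import TopologicalSorter

# Each module lists the upstream modules whose outputs it consumes
# (names are hypothetical, not aa's actual module names).
pipeline = {
    "realign":    [],
    "coregister": ["realign"],
    "normalise":  ["coregister"],
    "smooth":     ["normalise"],
    "vbm":        ["normalise"],
    "firstlevel": ["smooth"],
    "grouplevel": ["firstlevel", "vbm"],
}

ts = TopologicalSorter(pipeline)
ts.prepare()
schedule = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # batch of mutually independent modules
    schedule.append(ready)
    for mod in ready:
        ts.done(mod)                 # mark complete, unlocking downstream modules

print(schedule)
# → [['realign'], ['coregister'], ['normalise'], ['smooth', 'vbm'],
#    ['firstlevel'], ['grouplevel']]
```

    Tracking what is already "done" against this map is also what lets such an engine skip completed stages and recompute only what needs to be (re)done.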

  3. Optimizing a massive parallel sequencing workflow for quantitative miRNA expression analysis.

    Directory of Open Access Journals (Sweden)

    Francesca Cordero

    Full Text Available BACKGROUND: Massive Parallel Sequencing (MPS) methods can extend and improve the knowledge obtained by conventional microarray technology, both for mRNAs and short non-coding RNAs, e.g. miRNAs. The processing methods used to extract and interpret the information are an important aspect of dealing with the vast amounts of data generated from short read sequencing. Although the number of computational tools for MPS data analysis is constantly growing, their strengths and weaknesses as part of a complex analytical pipeline have not yet been well investigated. PRIMARY FINDINGS: A benchmark MPS miRNA dataset, resembling a situation in which miRNAs are spiked into biological replication experiments, was assembled by merging a publicly available MPS spike-in miRNA data set with MPS data derived from healthy donor peripheral blood mononuclear cells. Using this data set, we observed that short read counts are strongly underestimated for duplicated miRNAs when the whole genome is used as the reference. Furthermore, the sensitivity of miRNA detection depends strongly on the primary tool used in the analysis. Among the six aligners tested, specifically devoted to miRNA detection, SHRiMP and MicroRazerS show the highest sensitivity. Differential expression estimation is quite efficient: of the five tools investigated, two (DESeq, baySeq) show very good specificity and sensitivity in the detection of differential expression. CONCLUSIONS: The results of our analysis allow the definition of a clear and simple optimized analytical workflow for digital quantitative miRNA analysis.

  4. Optimizing a massive parallel sequencing workflow for quantitative miRNA expression analysis.

    Science.gov (United States)

    Cordero, Francesca; Beccuti, Marco; Arigoni, Maddalena; Donatelli, Susanna; Calogero, Raffaele A

    2012-01-01

    Massive Parallel Sequencing (MPS) methods can extend and improve the knowledge obtained by conventional microarray technology, both for mRNAs and short non-coding RNAs, e.g. miRNAs. The processing methods used to extract and interpret the information are an important aspect of dealing with the vast amounts of data generated from short read sequencing. Although the number of computational tools for MPS data analysis is constantly growing, their strengths and weaknesses as part of a complex analytical pipeline have not yet been well investigated. A benchmark MPS miRNA dataset, resembling a situation in which miRNAs are spiked into biological replication experiments, was assembled by merging a publicly available MPS spike-in miRNA data set with MPS data derived from healthy donor peripheral blood mononuclear cells. Using this data set, we observed that short read counts are strongly underestimated for duplicated miRNAs when the whole genome is used as the reference. Furthermore, the sensitivity of miRNA detection depends strongly on the primary tool used in the analysis. Among the six aligners tested, specifically devoted to miRNA detection, SHRiMP and MicroRazerS show the highest sensitivity. Differential expression estimation is quite efficient: of the five tools investigated, two (DESeq, baySeq) show very good specificity and sensitivity in the detection of differential expression. The results of our analysis allow the definition of a clear and simple optimized analytical workflow for digital quantitative miRNA analysis.

  5. Parallel experimental design and multivariate analysis provides efficient screening of cell culture media supplements to improve biosimilar product quality.

    Science.gov (United States)

    Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin

    2017-07-01

    Rational and high-throughput optimization of mammalian cell culture media has great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, the 17 compounds were separated into five different groups according to their mode of biological action. The concentration ranges of the medium supplements were defined from information in the literature and in-house experience. The screening experiments produced wide ranges of glycosylation patterns. Multivariate analysis, including principal component analysis and decision trees, was used to select the best-performing glycosylation modulators. A subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in the shake tube experiments: 75% of the conditions were as close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc.

  6. Efficiency Analysis of the Parallel Implementation of the SIMPLE Algorithm on Multiprocessor Computers

    Science.gov (United States)

    Lashkin, S. V.; Kozelkov, A. S.; Yalozo, A. V.; Gerasimov, V. Yu.; Zelensky, D. K.

    2017-12-01

    This paper describes the details of the parallel implementation of the SIMPLE algorithm for numerical solution of the Navier-Stokes system of equations on arbitrary unstructured grids. The iteration schemes for the serial and parallel versions of the SIMPLE algorithm are implemented. In the description of the parallel implementation, special attention is paid to computational data exchange among processors under the condition of the grid model decomposition using fictitious cells. We discuss the specific features for the storage of distributed matrices and implementation of vector-matrix operations in parallel mode. It is shown that the proposed way of matrix storage reduces the number of interprocessor exchanges. A series of numerical experiments illustrates the effect of the multigrid SLAE solver tuning on the general efficiency of the algorithm; the tuning involves the types of the cycles used (V, W, and F), the number of iterations of a smoothing operator, and the number of cells for coarsening. Two ways (direct and indirect) of efficiency evaluation for parallelization of the numerical algorithm are demonstrated. The paper presents the results of solving some internal and external flow problems with the evaluation of parallelization efficiency by two algorithms. It is shown that the proposed parallel implementation enables efficient computations for the problems on a thousand processors. Based on the results obtained, some general recommendations are made for the optimal tuning of the multigrid solver, as well as for selecting the optimal number of cells per processor.
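    The fictitious-cell exchange described above can be illustrated with a serial toy model. A minimal numpy sketch of a 1-D domain split into two subdomains, each padded with one ghost cell (illustrative only, not the paper's implementation; in a real code the two copies become MPI send/receive pairs):

```python
import numpy as np

def exchange_ghosts(left, right):
    """Copy boundary values into the neighbour's fictitious (ghost) cells.
    In a distributed code this pair of copies is an interprocessor exchange."""
    left[-1] = right[1]    # left's right ghost <- right's first interior cell
    right[0] = left[-2]    # right's left ghost <- left's last interior cell

# Global 1-D field split into two subdomains, each padded with one ghost cell.
u = np.linspace(0.0, 1.0, 8)
left  = np.concatenate([u[:4], [0.0]])   # interior cells 0..3 + right ghost
right = np.concatenate([[0.0], u[4:]])   # left ghost + interior cells 4..7

exchange_ghosts(left, right)

# Each subdomain now sees its neighbour's boundary value in its ghost cell,
# so a smoothing/update step can be applied to interior cells only.
print(left[-1], right[0])
```

    Keeping the number of such exchanges low per iteration is exactly the concern behind the distributed matrix storage discussed in the abstract.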

  7. Development of whole core thermal-hydraulic analysis program ACT. 4. Simplified fuel assembly model and parallelization by MPI

    International Nuclear Information System (INIS)

    Ohshima, Hiroyuki

    2001-10-01

    A whole-core thermal-hydraulic analysis program, ACT, is being developed to evaluate detailed in-core thermal-hydraulic phenomena of fast reactors, including the effect of the flow between wrapper-tube walls (inter-wrapper flow), under various reactor operating conditions. Since appropriate boundary conditions, in addition to detailed modeling of the core, are essential for accurate simulation of in-core thermal hydraulics, ACT consists not only of fuel assembly and inter-wrapper flow analysis modules but also of a heat transport system analysis module that gives the response of the plant dynamics to the core model. This report describes the incorporation of a simplified model into the fuel assembly analysis module and the parallelization of the program by a message passing method, toward large-scale simulations. ACT's fuel assembly analysis module can simulate the whole fuel pin bundle of each fuel assembly in the core; however, this can require substantial CPU time for a large-scale core simulation. Therefore, a simplified fuel assembly model that is thermal-hydraulically equivalent to the detailed one has been incorporated to save simulation time and resources. This simplified model is applied to fuel assemblies in parts of the core where detailed simulation results are not required. With regard to parallelization, the calculation load and data flow of ACT were analyzed and an optimal parallelization was implemented, including improvement of ACT's numerical simulation algorithm. The Message Passing Interface (MPI) is used for data communication between processes and for synchronization in parallel calculations. The parallelized ACT was verified through a comparison simulation with the original version. In addition to the above work, input manuals for the core analysis module and the heat transport system analysis module have been prepared. (author)

  8. Performance Analysis of a Threshold-Based Parallel Multiple Beam Selection Scheme for WDM FSO Systems

    KAUST Repository

    Nam, Sung Sik

    2018-04-09

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme for a free-space optical (FSO) system with wavelength division multiplexing (WDM) in the presence of pointing error under independent identically distributed Gamma-Gamma fading conditions. To simplify the mathematical analysis, we additionally consider Gamma turbulence conditions, which are a good approximation of the Gamma-Gamma distribution. Specifically, we statistically analyze the operating characteristics under conventional detection schemes (i.e., heterodyne detection (HD) and intensity modulation/direct detection (IM/DD)) for both the adaptive modulation (AM) case and the non-AM case (i.e., coherent/non-coherent binary modulation). Then, based on the statistically derived results, we evaluate the outage probability of a selected beam, the average spectral efficiency (ASE), the average number of selected beams (ANSB), and the average bit error rate (BER). Selected results show that the scheme can achieve higher spectral efficiency while limiting the additional implementation complexity introduced by beam selection, without considerable performance loss. Especially in the AM case, the ASE can be increased further compared to the non-AM cases. Our derived results based on the Gamma distribution, as an approximation of the Gamma-Gamma distribution, can serve as approximate performance bounds; in particular, they may provide lower bounds on the considered performance measures.
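    The selection rule and two of the metrics above (ANSB and outage of the selected beam) can be approximated by simulation. A Monte Carlo sketch under plain Gamma fading; all parameters are hypothetical and this illustrates the metrics, not the paper's closed-form analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Threshold-based parallel beam selection: among L beams with Gamma-distributed
# SNRs, select every beam whose SNR exceeds the threshold; fall back to the
# single best beam if none qualifies. (L, trials, threshold are hypothetical.)
L, trials, threshold = 4, 100_000, 1.0
snr = rng.gamma(shape=2.0, scale=0.5, size=(trials, L))  # Gamma fading proxy
above = snr >= threshold

# ANSB: average number of selected beams (at least one is always used).
n_selected = np.where(above.any(axis=1), above.sum(axis=1), 1)

# Outage: even the best beam falls below the threshold.
outage = np.mean(snr.max(axis=1) < threshold)

print(round(n_selected.mean(), 2), round(outage, 4))
```

    Raising the threshold trades a smaller ANSB (less implementation complexity) against a higher outage probability, which is the tension the statistical analysis quantifies.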

  9. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Full Text Available Gaussian elimination is used in many applications, in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (the Original method and the new Meet in the Middle (MiM) algorithms, and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with the operation latencies of modern multi-core systems. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for low numbers of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, highlighting a major problem of multi-core systems: the network-on-chip and memory latencies are too high relative to basic arithmetic operations. Gaussian elimination can thus greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory operation, synchronization, and interconnect communication latencies, requirements of utmost importance and challenge in the exascale computing age.
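    For reference, the baseline computation being modeled is ordinary Gaussian elimination; in the row-update loop each row is independent of the others, which is the parallelism the multi-core methods exploit. A serial sketch (not the MiM algorithm):

```python
import numpy as np

def gaussian_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    The inner row-update loop is independent across rows i, which is
    where a parallel implementation distributes work across cores."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]  # swap rows k and p
        for i in range(k + 1, n):                    # rows update independently
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_solve(A, b))                          # [0.8 1.4]
```

    The outer loop over pivots k is sequential, so per-core efficiency hinges on how cheaply the row updates can be synchronized between pivot steps, exactly the latency concern the abstract raises.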

  10. A novel approach to analyzing fMRI and SNP data via parallel independent component analysis

    Science.gov (United States)

    Liu, Jingyu; Pearlson, Godfrey; Calhoun, Vince; Windemuth, Andreas

    2007-03-01

    There is current interest in understanding genetic influences on brain function in both the healthy and the disordered brain. Parallel independent component analysis, a new method for analyzing multimodal data, is proposed in this paper and applied to functional magnetic resonance imaging (fMRI) and a single nucleotide polymorphism (SNP) array. The method aims to identify the independent components of each modality and the relationship between the two modalities. We analyzed 92 participants, including 29 schizophrenia (SZ) patients, 13 unaffected SZ relatives, and 50 healthy controls. We found a correlation of 0.79 between one fMRI component and one SNP component. The fMRI component consists of activations in the cingulate gyrus, multiple frontal gyri, and the superior temporal gyrus. The related SNP component is contributed to significantly by 9 SNPs located in sets of genes, including those coding for apolipoprotein A-I and C-III, malate dehydrogenase 1, and the gamma-aminobutyric acid alpha-2 receptor. A significant difference in the presence of this SNP component was found between the SZ group (SZ patients and their relatives) and the control group. In summary, we constructed a framework to identify interactions between brain functional and genetic information; our findings provide new insight into genetic influences on brain function in a common mental disorder.

  11. Convergence analysis of a class of massively parallel direction splitting algorithms for the Navier-Stokes equations in simple domains

    KAUST Repository

    Guermond, Jean-Luc; Minev, Peter D.; Salgado, Abner J.

    2012-01-01

    We provide a convergence analysis for a new fractional timestepping technique for the incompressible Navier-Stokes equations based on direction splitting. This new technique is of linear complexity, unconditionally stable and convergent, and suitable for massive parallelization. © 2012 American Mathematical Society.

  12. A time-variant analysis of the 1/f^(2) phase noise in CMOS parallel LC-Tank quadrature oscillators

    DEFF Research Database (Denmark)

    Andreani, Pietro

    2006-01-01

    This paper presents a study of 1/f2 phase noise in quadrature oscillators built by connecting two differential LC-tank oscillators in a parallel fashion. The analysis clearly demonstrates the necessity of adopting a time-variant theory of phase noise, where a more simplistic, time...

  13. Performance Analysis of a Threshold-Based Parallel Multiple Beam Selection Scheme for WDM FSO Systems

    KAUST Repository

    Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai

    2018-01-01

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme for a free-space optical (FSO) based system with wavelength division multiplexing (WDM) in cases where a pointing error has occurred

  14. Analysis of the contribution of sedimentation to bacterial mass transport in a parallel plate flow chamber

    NARCIS (Netherlands)

    Li, Jiuyi; Busscher, Henk J.; Norde, Willem; Sjollema, Jelmer

    2011-01-01

    In order to investigate bacterium-substratum interactions, understanding of bacterial mass transport is necessary. Comparisons of experimentally observed initial deposition rates with mass transport rates in parallel-plate-flow-chambers (PPFC) predicted by convective-diffusion yielded deposition

  15. SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices

    Science.gov (United States)

    Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto

    2017-08-01

    Recently we demonstrated a novel and simplified model enabling calculation of the voltage-dependent retardance provided by parallel-aligned liquid crystal on silicon (PA-LCoS) devices for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach that still shows predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether its parameters are physically meaningful. Since the PA-LCoS is a black box for which we have no information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations of the nonlinear tilt of the liquid crystal director across the thickness of the liquid crystal layer in PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite-difference time-domain (SF-FDTD) technique, which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe the internal characteristics of the PA-LCoS device.

  16. Accuracy analysis of hybrid parallel robot for the assembling of ITER

    International Nuclear Information System (INIS)

    Wang Yongbo; Pessi, Pekka; Wu Huapeng; Handroos, Heikki

    2009-01-01

    This paper presents a novel mobile parallel robot capable of carrying out welding and machining processes inside the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV). The kinematic design of the robot has been optimized for ITER access. To improve the accuracy of the parallel robot, the errors caused by limited stiffness and the manufacturing process have to be compensated for or limited to a minimum value. In this paper, kinematic error and stiffness models are given. The simulation results are presented.

  17. Accuracy analysis of hybrid parallel robot for the assembling of ITER

    Energy Technology Data Exchange (ETDEWEB)

    Wang Yongbo [Institute of Mechatronics and Virtual Engineering, Lappeenranta University of Technology, Skinnarilankatu 34, 53850 Lappeenranta (Finland); The State Key Laboratory of Mechanical Transmission, Chongqing University (China); Pessi, Pekka [Institute of Mechatronics and Virtual Engineering, Lappeenranta University of Technology, Skinnarilankatu 34, 53850 Lappeenranta (Finland); Wu Huapeng [Institute of Mechatronics and Virtual Engineering, Lappeenranta University of Technology, Skinnarilankatu 34, 53850 Lappeenranta (Finland)], E-mail: huapeng@lut.fi; Handroos, Heikki [Institute of Mechatronics and Virtual Engineering, Lappeenranta University of Technology, Skinnarilankatu 34, 53850 Lappeenranta (Finland)

    2009-06-15

    This paper presents a novel mobile parallel robot capable of carrying out welding and machining processes inside the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV). The kinematic design of the robot has been optimized for ITER access. To improve the accuracy of the parallel robot, the errors caused by limited stiffness and the manufacturing process have to be compensated for or limited to a minimum value. In this paper, kinematic error and stiffness models are given. The simulation results are presented.

  18. Analysis of parallel optical sampling rate and ADC requirements in digital coherent receivers

    DEFF Research Database (Denmark)

    Lorences Riesgo, Abel; Galili, Michael; Peucheret, Christophe

    2012-01-01

    We comprehensively assess analog-to-digital converter requirements in coherent digital receiver schemes with parallel optical sampling. We determine the electronic requirements in accordance with the properties of the free-running local oscillator.

  19. Stiffness Analysis and Comparison of 3-PPR Planar Parallel Manipulators with Actuation Compliance

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Kepler, Jørgen Asbøl

    2012-01-01

    In this paper, the stiffness of a 3-PPR planar parallel manipulator (PPM) is analyzed with consideration of nonlinear actuation compliance. The characteristics of the stiffness matrix pertaining to planar parallel manipulators are analyzed and discussed. A graphic representation of the stiffness characteristics by means of translational and rotational stiffness mapping is developed. The developed method is illustrated with an unsymmetrical 3-PPR PPM, which is compared with its structure-symmetrical counterpart.

  20. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues in brand development. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors; the sample was drawn from two major auto makers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach’s alpha is calculated as 84%, well above the minimum desirable limit of 0.70. The implementation of factor analysis yields six factors, including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image”, and “previous perceptions”.
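    The reported Cronbach's alpha of 0.84 follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A sketch on hypothetical Likert responses (the data below are made up for illustration):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) Likert response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical responses from 5 participants to 4 Likert items.
data = [[4, 5, 4, 4],
        [3, 3, 3, 4],
        [5, 5, 4, 5],
        [2, 2, 3, 2],
        [4, 4, 4, 4]]
print(round(cronbach_alpha(data), 2))   # 0.94
```

    Values above roughly 0.70, as in the study, are conventionally read as acceptable internal consistency for a questionnaire scale.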

  1. Analysis of flow distribution instability in parallel thin rectangular multi-channel system

    Energy Technology Data Exchange (ETDEWEB)

    Xia, G.L. [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an City 710049 (China); Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, Harbin Engineering University, Harbin City 150001 (China); Su, G.H., E-mail: ghsu@mail.xjtu.edu.cn [School of Nuclear Science and Technology, Xi’an Jiaotong University, Xi’an City 710049 (China); Peng, M.J. [Fundamental Science on Nuclear Safety and Simulation Technology Laboratory, Harbin Engineering University, Harbin City 150001 (China)

    2016-08-15

    Highlights: • Flow distribution instability in a parallel thin rectangular multi-channel system is studied using the RELAP5 code. • Flow excursion may bring parallel heating channels into the density-wave oscillation region. • Flow distribution instability is more likely to occur at low power/flow-ratio conditions. • Increasing the channel number does not affect the flow distribution instability boundary. • Asymmetric inlet throttling and heating make the system more unstable. - Abstract: Flow distribution instability in a parallel thin rectangular multi-channel system is investigated in the present study. The parallel channel system is modeled using the RELAP5/MOD3.4 code. The transient process of flow distribution instability is studied under imposed inlet mass flow rate and imposed pressure drop conditions. The influences of heating power, mass flow rate, system pressure and channel number on flow distribution instability are analyzed. Furthermore, the flow distribution instability of a parallel two-channel system under asymmetric inlet throttling and heating power is studied. The results show that, if the multi-channel system operates in the negative-slope region of the channel ΔP–G curve, a small disturbance in pressure drop will lead to flow redistribution between parallel channels. Flow excursion may bring the operating point of a heating channel into the density-wave oscillation region, resulting in out-of-phase or in-phase flow oscillations. Flow distribution instability is more likely to occur at low power/flow-ratio conditions; the stability of the parallel channel system increases with system pressure; the channel number has little effect on system stability; but asymmetric inlet throttling or heating power makes the system more unstable.
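
    The flow-excursion criterion invoked above (instability where the channel ΔP–G curve has negative slope) can be sketched numerically. The cubic characteristic below is purely illustrative, not RELAP5 output:

```python
import numpy as np

# Hypothetical N-shaped channel characteristic dP(G): at intermediate flow,
# vapor generation makes pressure drop fall as mass flux rises.
G = np.linspace(50.0, 500.0, 1000)            # mass flux, kg/(m^2 s)
dP = 1e-4 * G**3 - 0.09 * G**2 + 24.0 * G     # pressure drop, arbitrary units

slope = np.gradient(dP, G)   # numerical d(dP)/dG
unstable = slope < 0         # flow-excursion (Ledinegg-type) region
```

    For this illustrative curve the negative-slope region lies between G ≈ 200 and G ≈ 400; an operating point there is susceptible to flow redistribution between parallel channels.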

  2. Analysis of passive scalar advection in parallel shear flows: Sorting of modes at intermediate time scales

    Science.gov (United States)

    Camassa, Roberto; McLaughlin, Richard M.; Viotti, Claudio

    2010-11-01

    The time evolution of a passive scalar advected by parallel shear flows is studied for a class of rapidly varying initial data. Such situations are of practical importance in a wide range of applications from microfluidics to geophysics. In these contexts, it is well known that the long-time evolution of the tracer concentration is governed by Taylor's asymptotic theory of dispersion. In contrast, we focus here on the evolution of the tracer at intermediate time scales. We show how intermediate regimes can be identified before Taylor's, and in particular, how the Taylor regime can be delayed indefinitely by properly manufactured initial data. A complete characterization of the sorting of these time scales and their associated spatial structures is presented. These analytical predictions are compared with highly resolved numerical simulations. Specifically, this comparison is carried out for the case of periodic variations in the streamwise direction on the short scale with envelope modulations on the long scales, and we show how this structure can lead to "anomalously" diffusive transients in the evolution of the scalar onto the ultimate regime governed by Taylor dispersion. Mathematically, the occurrence of these transients can be viewed as a competition in the asymptotic dominance between large Péclet (Pe) numbers and the long/short scale aspect ratios (LVel/LTracer≡k), two independent nondimensional parameters of the problem. We provide analytical predictions of the associated time scales by a modal analysis of the eigenvalue problem arising in the separation of variables of the governing advection-diffusion equation. The anomalous time scale in the asymptotic limit of large k Pe is derived for the short-scale periodic structure of the scalar's initial data, both for exactly solvable cases and in general with WKBJ analysis. In particular, the exactly solvable sawtooth flow is especially important in that it provides a short cut to the exact solution to the...

  3. PAGANI Toolkit: Parallel graph-theoretical analysis package for brain network big data.

    Science.gov (United States)

    Du, Haixiao; Xia, Mingrui; Zhao, Kang; Liao, Xuhong; Yang, Huazhong; Wang, Yu; He, Yong

    2018-05-01

    The recent collection of unprecedented quantities of neuroimaging data with high spatial resolution has led to brain network big data. However, a toolkit for fast and scalable computational solutions is still lacking. Here, we developed the PArallel Graph-theoretical ANalysIs (PAGANI) Toolkit based on a hybrid central processing unit-graphics processing unit (CPU-GPU) framework with a graphical user interface to facilitate the mapping and characterization of high-resolution brain networks. Specifically, the toolkit provides flexible parameters for users to customize computations of graph metrics in brain network analyses. As an empirical example, the PAGANI Toolkit was applied to individual voxel-based brain networks with ∼200,000 nodes that were derived from a resting-state fMRI dataset of 624 healthy young adults from the Human Connectome Project. Using a personal computer, this toolbox completed all computations in ∼27 h for one subject, which is markedly less than the 118 h required with a single-thread implementation. The voxel-based functional brain networks exhibited prominent small-world characteristics and densely connected hubs, which were mainly located in the medial and lateral fronto-parietal cortices. Moreover, the female group had significantly higher modularity and nodal betweenness centrality mainly in the medial/lateral fronto-parietal and occipital cortices than the male group. Significant correlations between the intelligence quotient and nodal metrics were also observed in several frontal regions. Collectively, the PAGANI Toolkit shows high computational performance and good scalability for analyzing connectome big data and provides a friendly interface without the complicated configuration of computing environments, thereby facilitating high-resolution connectomics research in health and disease. © 2018 Wiley Periodicals, Inc.
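
    The graph metrics named above (small-world ingredients such as clustering and path length) reduce to plain breadth-first search on an unweighted network. This is a minimal serial sketch on a toy graph, not the PAGANI CPU-GPU implementation:

```python
from collections import deque
from itertools import combinations

def bfs_lengths(adj, src):
    """Shortest-path lengths from src by BFS (unweighted, undirected graph)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def char_path_length(adj):
    """Mean shortest-path length over all connected ordered node pairs."""
    total, pairs = 0, 0
    for u in adj:
        for v, length in bfs_lengths(adj, u).items():
            if v != u:
                total += length
                pairs += 1
    return total / pairs

def clustering(adj, u):
    """Fraction of u's neighbour pairs that are themselves linked."""
    nbrs = list(adj[u])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * links / (k * (k - 1))

# Toy 5-node network: a triangle (0,1,2) with a two-node tail (3,4)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
L = char_path_length(adj)                         # characteristic path length
C = sum(clustering(adj, u) for u in adj) / len(adj)  # mean clustering
```

    A small-world network combines high C with low L relative to a random graph of the same density; toolkits such as PAGANI compute these quantities at voxel scale on GPU hardware.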

  4. Genotypic tropism testing by massively parallel sequencing: qualitative and quantitative analysis

    Directory of Open Access Journals (Sweden)

    Thiele Bernhard

    2011-05-01

    Abstract. Background: Inferring viral tropism from genotype is a fast and inexpensive alternative to phenotypic testing. While highly predictive when performed on clonal samples, the sensitivity of predicting CXCR4-using (X4) variants drops substantially in clinical isolates. This is mainly attributed to minor variants not detected by standard bulk sequencing. Massively parallel sequencing (MPS) detects single clones and is thereby much more sensitive. Using this technology we wanted to improve genotypic prediction of coreceptor usage. Methods: Plasma samples from 55 antiretroviral-treated patients tested for coreceptor usage with the Monogram Trofile Assay were sequenced with standard population-based approaches. Fourteen of these samples were selected for further analysis with MPS. Tropism was predicted from each sequence with geno2pheno[coreceptor]. Results: Prediction based on bulk sequencing yielded 59.1% sensitivity and 90.9% specificity compared to the Trofile assay. With MPS, 7600 reads were generated on average per isolate. Minorities of sequences with high confidence in CXCR4 usage were found in all samples, irrespective of phenotype. When using the default false-positive rate of geno2pheno[coreceptor] (10%) and defining a minority cutoff of 5%, the results were concordant in all but one isolate. Conclusions: The combination of MPS and coreceptor usage prediction results in a fast and accurate alternative to phenotypic assays. The detection of X4 viruses in all isolates suggests that coreceptor usage as well as fitness of minorities is important for therapy outcome. The high sensitivity of this technology, in combination with a quantitative description of the viral population, may allow implementing meaningful cutoffs for predicting response to CCR5 antagonists in the presence of X4 minorities.

  5. Genotypic tropism testing by massively parallel sequencing: qualitative and quantitative analysis.

    Science.gov (United States)

    Däumer, Martin; Kaiser, Rolf; Klein, Rolf; Lengauer, Thomas; Thiele, Bernhard; Thielen, Alexander

    2011-05-13

    Inferring viral tropism from genotype is a fast and inexpensive alternative to phenotypic testing. While being highly predictive when performed on clonal samples, sensitivity of predicting CXCR4-using (X4) variants drops substantially in clinical isolates. This is mainly attributed to minor variants not detected by standard bulk-sequencing. Massively parallel sequencing (MPS) detects single clones thereby being much more sensitive. Using this technology we wanted to improve genotypic prediction of coreceptor usage. Plasma samples from 55 antiretroviral-treated patients tested for coreceptor usage with the Monogram Trofile Assay were sequenced with standard population-based approaches. Fourteen of these samples were selected for further analysis with MPS. Tropism was predicted from each sequence with geno2pheno[coreceptor]. Prediction based on bulk-sequencing yielded 59.1% sensitivity and 90.9% specificity compared to the trofile assay. With MPS, 7600 reads were generated on average per isolate. Minorities of sequences with high confidence in CXCR4-usage were found in all samples, irrespective of phenotype. When using the default false-positive-rate of geno2pheno[coreceptor] (10%), and defining a minority cutoff of 5%, the results were concordant in all but one isolate. The combination of MPS and coreceptor usage prediction results in a fast and accurate alternative to phenotypic assays. The detection of X4-viruses in all isolates suggests that coreceptor usage as well as fitness of minorities is important for therapy outcome. The high sensitivity of this technology in combination with a quantitative description of the viral population may allow implementing meaningful cutoffs for predicting response to CCR5-antagonists in the presence of X4-minorities.
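
    The reported decision rule (call a read X4 when its geno2pheno[coreceptor] false-positive rate is below 10%, call the sample X4 when such reads reach the 5% minority cutoff) can be sketched as a simple read-level classifier. Function and variable names are illustrative; the per-read FPR values would come from geno2pheno itself:

```python
def sample_tropism(read_fprs, fpr_cutoff=0.10, minority_cutoff=0.05):
    """Call a sample X4 if the fraction of reads whose predicted
    false-positive rate falls below fpr_cutoff reaches minority_cutoff."""
    x4_reads = sum(1 for fpr in read_fprs if fpr < fpr_cutoff)
    fraction = x4_reads / len(read_fprs)
    call = "X4" if fraction >= minority_cutoff else "R5"
    return call, fraction

# 1000 hypothetical reads, 7% with high-confidence X4 predictions (FPR < 10%)
fprs = [0.02] * 70 + [0.60] * 930
call, frac = sample_tropism(fprs)
```

    With ~7600 reads per isolate, a 5% minority corresponds to several hundred reads, well above sequencing noise for most error models.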

  6. A New XYZ Compliant Parallel Mechanism for Micro-/Nano-Manipulation: Design and Analysis

    Directory of Open Access Journals (Sweden)

    Haiyang Li

    2016-02-01

    Based on the constraint and position identification (CPI) approach for synthesizing XYZ compliant parallel mechanisms (CPMs) and configuration modifications, this paper proposes a new fully symmetrical XYZ CPM with desired motion characteristics such as reduced cross-axis coupling, minimized lost motion, and relatively small parasitic motion. The good motion characteristics arise not only from its symmetric configuration, but also from the rigid linkages between non-adjacent rigid stages. A comprehensive kinematic analysis is carried out based on a series of finite element simulations over a motion range per axis of less than ±5% of the beam length, which reveals that the maximum cross-axis coupling rate is less than 0.86%, the maximum lost-motion rate is less than 1.20%, the parasitic rotations of the motion stage (MS) are on the order of 10−5 rad, and the parasitic translations of the three actuated stages (ASs) are on the order of 10−4 of the beam length (less than 0.3% of the motion range), where the beam slenderness ratio is larger than 20. Furthermore, nonlinear analytical models of the primary translations of the XYZ CPM, including the primary translations of the MS and the ASs, are derived and validated to provide a quick design synthesis. Moreover, two practical design schemes of the proposed XYZ CPM are discussed with consideration of manufacturability. The practical designs enable the XYZ CPM to be employed in many applications such as micro-/nano-positioning, micro-/nano-manufacturing and micro-/nano-assembly. Finally, a spatial high-precision translational system is presented based on the practical design schemes, taking actuator and sensor integration into account.

  7. Parallel multispot smFRET analysis using an 8-pixel SPAD array

    Science.gov (United States)

    Ingargiola, A.; Colyer, R. A.; Kim, D.; Panzeri, F.; Lin, R.; Gulinatti, A.; Rech, I.; Ghioni, M.; Weiss, S.; Michalet, X.

    2012-02-01

    Single-molecule Förster resonance energy transfer (smFRET) is a powerful tool for extracting distance information between two fluorophores (a donor and an acceptor dye) on a nanometer scale. This method is commonly used to monitor binding interactions or intra- and intermolecular conformations in biomolecules freely diffusing through a focal volume or immobilized on a surface. The diffusing geometry has the advantage of not interfering with the molecules and of giving access to fast time scales. However, separating photon bursts from individual molecules requires low sample concentrations. This results in long acquisition times (several minutes to an hour) to obtain sufficient statistics. It also prevents studying dynamic phenomena happening on time scales larger than the burst duration and smaller than the acquisition time. Parallelization of acquisition overcomes this limit by increasing the acquisition rate using the same low concentrations required for individual-molecule burst identification. In this work we present a new two-color smFRET approach using multispot excitation and detection. The donor excitation pattern is composed of 4 spots arranged in a linear pattern. The fluorescent emission of donor and acceptor dyes is then collected and refocused on two separate areas of a custom 8-pixel SPAD array. We report smFRET measurements performed on DNA samples synthesized with various distances between the donor and acceptor fluorophores. We demonstrate that our approach provides FRET efficiency values identical to a conventional single-spot acquisition approach, but with a reduced acquisition time. Our work thus opens the way to high-throughput smFRET analysis on freely diffusing molecules.
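
    Burst-wise FRET efficiency is conventionally estimated from donor and acceptor photon counts. A minimal sketch, assuming background-corrected counts and a detection-correction factor gamma (the abstract does not specify the authors' correction procedure):

```python
def fret_efficiency(n_acceptor, n_donor, gamma=1.0):
    """Burst-wise FRET efficiency E = nA / (nA + gamma * nD).
    With gamma == 1 this is the uncorrected proximity ratio."""
    return n_acceptor / (n_acceptor + gamma * n_donor)

# Photon counts (acceptor, donor) from two hypothetical bursts:
# a high-FRET (short-distance) and a low-FRET (long-distance) species.
bursts = [(80, 20), (30, 70)]
effs = [fret_efficiency(a, d) for a, d in bursts]   # [0.8, 0.3]
```

    Histogramming such per-burst efficiencies over many molecules is what separates DNA samples with different donor-acceptor distances.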

  8. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  9. Comparative Analysis of Torque and Acceleration of Pre- and Post-Transmission Parallel Hybrid Drivetrains

    Directory of Open Access Journals (Sweden)

    Zulkifli Saiful A.

    2016-01-01

    Parallel hybrid electric vehicles (HEVs) can be classified according to the location of the electric motor with respect to the transmission unit of the internal combustion engine (ICE): they can be pre-transmission or post-transmission parallel hybrids. A split-axle parallel HEV – in which the ICE and electric motor provide propulsion power to different axles – is a sub-type of the post-transmission hybrid, since the addition of torque and power from the two power sources occurs after the vehicle's transmission. The term 'through-the-road' (TTR) hybrid is also used for the split-parallel HEV, since power coupling between the ICE and electric motor is not through a mechanical device but through the vehicle itself, its wheels and the road on which it moves. The present work presents the torque-speed relationship of the split-parallel hybrid and analyses simulation results of torque profiles and acceleration performance of pre-transmission and post-transmission hybrid configurations, using three different sizes of electric motor. Different operating regions of the pre-trans and post-trans motors are observed, leading to different speed and torque profiles. Although ICE average efficiency in the post-trans hybrid is slightly lower than in the pre-trans hybrid, the post-trans hybrid vehicle has better fuel economy and acceleration performance than the pre-trans hybrid vehicle.
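
    The torque-addition difference between the two configurations can be made concrete with a one-line model each: in a pre-transmission hybrid both torques pass through the gearbox, while in a post-transmission (TTR) hybrid the motor torque bypasses it. The numbers below are illustrative, not from the paper's simulations:

```python
def wheel_torque_pre(t_ice, t_motor, gear_ratio):
    """Pre-transmission hybrid: ICE and motor torques add at the
    transmission input, so both are multiplied by the gear ratio."""
    return (t_ice + t_motor) * gear_ratio

def wheel_torque_post(t_ice, t_motor, gear_ratio):
    """Post-transmission (through-the-road) hybrid: motor torque is
    added after the gearbox and is not multiplied by it."""
    return t_ice * gear_ratio + t_motor

# 120 Nm engine, 60 Nm motor, 3.5:1 overall ratio (illustrative values)
pre = wheel_torque_pre(120.0, 60.0, 3.5)    # 630 Nm at the wheels
post = wheel_torque_post(120.0, 60.0, 3.5)  # 480 Nm at the wheels
```

    This is why the motor's torque contribution in a post-trans layout matters most at higher vehicle speeds, where the engine-side gear ratio is low.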

  10. PARALLEL ADAPTIVE MULTILEVEL SAMPLING ALGORITHMS FOR THE BAYESIAN ANALYSIS OF MATHEMATICAL MODELS

    KAUST Repository

    Prudencio, Ernesto; Cheung, Sai Hung

    2012-01-01

    In recent years, Bayesian model updating techniques based on measured data have been applied to many engineering and applied science problems. At the same time, parallel computational platforms are becoming increasingly more powerful and are being used more frequently by the engineering and scientific communities. Bayesian techniques usually require the evaluation of multi-dimensional integrals related to the posterior probability density function (PDF) of uncertain model parameters. The fact that such integrals cannot be computed analytically motivates the research of stochastic simulation methods for sampling posterior PDFs. One such algorithm is the adaptive multilevel stochastic simulation algorithm (AMSSA). In this paper we discuss the parallelization of AMSSA, formulating the necessary load balancing step as a binary integer programming problem. We present a variety of results showing the effectiveness of load balancing on the overall performance of AMSSA in a parallel computational environment.

  11. Sequential combination of k-t principal component analysis (PCA) and partial parallel imaging: k-t PCA GROWL.

    Science.gov (United States)

    Qi, Haikun; Huang, Feng; Zhou, Hongmei; Chen, Huijun

    2017-03-01

    k-t principal component analysis (k-t PCA) is a distinguished method for high spatiotemporal resolution dynamic MRI. To further improve the accuracy of k-t PCA, a combination with partial parallel imaging (PPI), k-t PCA/SENSE, has been tested. However, k-t PCA/SENSE suffers from long reconstruction time and limited improvement. This study aims to improve the combination of k-t PCA and PPI in both reconstruction speed and accuracy. A sequential combination scheme called k-t PCA GROWL (GRAPPA operator for wider readout line) was proposed. The GRAPPA operator was performed before k-t PCA to extend each readout line into a wider band, which improved the condition of the encoding matrix in the following k-t PCA reconstruction. k-t PCA GROWL was tested and compared with k-t PCA and k-t PCA/SENSE on cardiac imaging. k-t PCA GROWL consistently resulted in better image quality than k-t PCA/SENSE at high acceleration factors for both retrospectively and prospectively undersampled cardiac imaging, with a much lower computation cost. The improvement in image quality became greater with increasing acceleration factor. By sequentially combining the GRAPPA operator and k-t PCA, the proposed k-t PCA GROWL method outperformed k-t PCA/SENSE in both reconstruction speed and accuracy, suggesting that k-t PCA GROWL is a better combination scheme than k-t PCA/SENSE. Magn Reson Med 77:1058-1067, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. Large-Scale Parallel Finite Element Analysis of the Stress Singular Problems

    International Nuclear Information System (INIS)

    Noriyuki Kushida; Hiroshi Okuda; Genki Yagawa

    2002-01-01

    In this paper, the convergence behavior of the large-scale parallel finite element method for stress-singular problems was investigated. The convergence behavior of iterative solvers depends on the efficiency of the preconditioners. However, the efficiency of preconditioners may be influenced by the domain decomposition that is necessary for parallel FEM. In this study the following results were obtained: the conjugate gradient method without preconditioning and the diagonal-scaling preconditioned conjugate gradient method were not influenced by the domain decomposition, as expected; the symmetric successive over-relaxation preconditioned conjugate gradient method converged up to 6% faster when the stress-singular area was contained in one sub-domain. (authors)
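
    For reference, diagonal-scaling (Jacobi) preconditioning of the conjugate gradient method, one of the variants compared in the study, takes the generic form below. This is a textbook sketch on a tiny dense system, not the paper's parallel FEM solver:

```python
import numpy as np

def pcg_jacobi(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients with diagonal (Jacobi) preconditioning M = diag(A)."""
    M_inv = 1.0 / np.diag(A)       # applying M^{-1} is an elementwise scaling
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    z = M_inv * r                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # update search direction
        rz = rz_new
    return x, it + 1

# Small symmetric positive-definite test system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, iters = pcg_jacobi(A, b)
```

    Because the preconditioner is purely local (each entry scaled by its own diagonal), it is insensitive to how the mesh is decomposed across processors, consistent with the paper's observation.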

  13. Quantitative and Selective Analysis of Feline Growth Related Proteins Using Parallel Reaction Monitoring High Resolution Mass Spectrometry.

    Directory of Open Access Journals (Sweden)

    Mårten Sundberg

    Today, immunoassays are widely used in veterinary medicine, but the lack of species-specific assays often necessitates the use of assays developed for human applications. Mass spectrometry (MS) is an attractive alternative due to its high specificity and versatility, allowing for species-independent analysis. Targeted MS-based quantification methods are valuable complements to large-scale shotgun analysis. A method referred to as parallel reaction monitoring (PRM), implemented on Orbitrap MS, has lately been presented as an excellent alternative to more traditional selected reaction monitoring/multiple reaction monitoring (SRM/MRM) methods. The insulin-like growth factor (IGF) system is not well described in the cat, but there are indications of important differences between cats and humans. In feline medicine, IGF-I is mainly analyzed for the diagnosis of growth hormone disorders but also for research, while the other proteins in the IGF system are not routinely analyzed in clinical practice. Here, a PRM method for quantification of IGF-I, IGF-II, IGF binding protein (IGFBP)-3 and IGFBP-5 in feline serum is presented. Selective quantification was supported by the use of a newly launched internal standard named QPrEST™. Homology searches demonstrated the possibility of using this standard of human origin for quantification of the targeted feline proteins. Excellent quantitative sensitivity at the attomol/μL (pM) level and selectivity were obtained. As the presented approach is very generic, we show that high-resolution mass spectrometry in combination with PRM and QPrEST™ internal standards is a versatile tool for protein quantitation across multiple species.

  14. Massively parallel signature sequencing and bioinformatics analysis identifies up-regulation of TGFBI and SOX4 in human glioblastoma.

    Directory of Open Access Journals (Sweden)

    Biaoyang Lin

    BACKGROUND: A comprehensive network-based understanding of molecular pathways abnormally altered in glioblastoma multiforme (GBM) is essential for developing effective therapeutic approaches for this deadly disease. METHODOLOGY/PRINCIPAL FINDINGS: Applying a next-generation sequencing technology, massively parallel signature sequencing (MPSS), we identified a total of 4535 genes that are differentially expressed between normal brain and GBM tissue. The expression changes of three up-regulated genes, CHI3L1, CHI3L2, and FOXM1, and two down-regulated genes, neurogranin and L1CAM, were confirmed by quantitative PCR. Pathway analysis revealed that TGF-beta pathway-related genes were significantly up-regulated in GBM tumor samples. An integrative pathway analysis of the TGF-beta signaling network identified two alternative TGF-beta signaling pathways mediated by SOX4 (sex determining region Y-box 4) and TGFBI (transforming growth factor beta induced). Quantitative RT-PCR and immunohistochemistry staining demonstrated that SOX4 and TGFBI expression is elevated in GBM tissues compared with normal brain tissues at both the RNA and protein levels. In vitro functional studies confirmed that TGFBI and SOX4 expression is increased by TGF-beta stimulation and decreased by a specific inhibitor of TGF-beta receptor 1 kinase. CONCLUSIONS/SIGNIFICANCE: Our MPSS database for GBM and normal brain tissues provides a useful resource for the scientific community. The identification of non-SMAD-mediated TGF-beta signaling pathways acting through SOX4 and TGFBI (Gene ID: 7045) in GBM indicates that these alternative pathways should be considered, in addition to the canonical SMAD-mediated pathway, in the development of new therapeutic strategies targeting TGF-beta signaling in GBM. Finally, the construction of an extended TGF-beta signaling network with overlaid gene expression changes between GBM and normal brain extends our understanding of the biology of GBM.

  15. Analysis, design, and experimental evaluation of power calculation in digital droop-controlled parallel microgrid inverters

    DEFF Research Database (Denmark)

    Gao, Ming-zhi; Chen, Min; Jin, Cheng

    2013-01-01

    Parallel operation of distributed generation is an important topic for microgrids, which can provide a highly reliable electric supply service and good power quality to end customers when the utility is unavailable. However, there is a well-known limitation: the power sharing accuracy between...

  16. Visual analysis of inter-process communication for large-scale parallel computing.

    Science.gov (United States)

    Muelder, Chris; Gygi, Francois; Ma, Kwan-Liu

    2009-01-01

    In serial computation, program profiling is often helpful for optimization of key sections of code. When moving to parallel computation, not only does the code execution need to be considered but also communication between the different processes which can induce delays that are detrimental to performance. As the number of processes increases, so does the impact of the communication delays on performance. For large-scale parallel applications, it is critical to understand how the communication impacts performance in order to make the code more efficient. There are several tools available for visualizing program execution and communications on parallel systems. These tools generally provide either views which statistically summarize the entire program execution or process-centric views. However, process-centric visualizations do not scale well as the number of processes gets very large. In particular, the most common representation of parallel processes is a Gantt chart with a row for each process. As the number of processes increases, these charts can become difficult to work with and can even exceed screen resolution. We propose a new visualization approach that affords more scalability and then demonstrate it on systems running with up to 16,384 processes.

  17. Performance analysis of parallel identical machines with a generalized shortest queue arrival mechanism

    NARCIS (Netherlands)

    van Houtum, Geert-Jan; Adan, I.J.B.F.; Wessels, J.; Zijm, Willem H.M.

    In this paper we study a production system consisting of a group of parallel machines producing multiple job types. Each machine has its own queue and it can process a restricted set of job types only. On arrival a job joins the shortest queue among all queues capable of serving that job. Under the
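
    The arrival mechanism described above (a job joins the shortest queue among the machines capable of serving its type) is straightforward to sketch; the machine names and job types below are illustrative:

```python
def route_job(job_type, queues, capabilities):
    """Send a job to the shortest queue among machines able to serve it.
    Ties are broken by the machines' iteration order."""
    eligible = [m for m, types in capabilities.items() if job_type in types]
    return min(eligible, key=lambda m: len(queues[m]))

# Two machines: m0 serves job types {A, B}, m1 serves only {B}
capabilities = {"m0": {"A", "B"}, "m1": {"B"}}
queues = {"m0": ["job1", "job2"], "m1": ["job3"]}

chosen = route_job("B", queues, capabilities)  # m1 has the shorter queue
queues[chosen].append("job4")
```

    Restricting the choice to capable machines is what generalizes the classical symmetric shortest-queue model and complicates its exact analysis.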

  18. Analysis of single blow effectiveness in non-uniform parallel plate regenerators

    DEFF Research Database (Denmark)

    Jensen, Jesper Buch; Bahl, Christian Robert Haffenden; Engelbrecht, Kurt

    2011-01-01

    Non-uniform distributions of plate spacings in parallel plate regenerators have been found to induce loss of performance. In this paper, it has been investigated how variations of three geometric parameters (the aspect ratio, the porosity, and the standard deviation of the plate spacing) affects...

  19. Par@Graph - a parallel toolbox for the construction and analysis of large complex climate networks

    NARCIS (Netherlands)

    Tantet, A.J.J.

    2015-01-01

    In this paper, we present Par@Graph, a software toolbox to reconstruct and analyze complex climate networks having a large number of nodes (up to at least 106) and edges (up to at least 1012). The key innovation is an efficient set of parallel software tools designed to leverage the inherited hybrid

  20. Design and implementation of parallel video encoding strategies using divisible load analysis

    NARCIS (Netherlands)

    Li, Ping; Veeravalli, Bharadwaj; Kassim, A.A.

    2005-01-01

    The processing time needed for motion estimation usually accounts for a significant part of the overall processing time of the video encoder. To improve the video encoding speed, reducing the execution time for motion estimation process is essential. Parallel implementation of video encoding systems

  1. Analysis of Serial and Parallel Algorithms for Use in Molecular Dynamics. Review and Proposals

    Science.gov (United States)

    Mazzone, A. M.

    This work analyzes the stability and accuracy of multistep methods, either for serial or parallel calculations, applied to molecular dynamics simulations. Numerical testing is made by evaluating the equilibrium configurations of mono-elemental crystalline lattices of metallic and semiconducting type (Ag and Si, respectively) and of a cubic CuY compound.

  2. Analysis of clinical complication data for radiation hepatitis using a parallel architecture model

    International Nuclear Information System (INIS)

    Jackson, A.; Haken, R.K. ten; Robertson, J.M.; Kessler, M.L.; Kutcher, G.J.; Lawrence, T.S.

    1995-01-01

    Purpose: The detailed knowledge of dose volume distributions available from the three-dimensional (3D) conformal radiation treatment of tumors in the liver (reported elsewhere) offers new opportunities to quantify the effect of volume on the probability of producing radiation hepatitis. We aim to test a new parallel architecture model of normal tissue complication probability (NTCP) with these data. Methods and Materials: Complication data and dose volume histograms from a total of 93 patients with normal liver function, treated on a prospective protocol with 3D conformal radiation therapy and intraarterial hepatic fluorodeoxyuridine, were analyzed with a new parallel architecture model. Patient treatment fell into six categories differing in doses delivered and volumes irradiated. By modeling the radiosensitivity of liver subunits, we are able to use dose volume histograms to calculate the fraction of the liver damaged in each patient. A complication results if this fraction exceeds the patient's functional reserve. To determine the patient distribution of functional reserves and the subunit radiosensitivity, the maximum likelihood method was used to fit the observed complication data. Results: The parallel model fit the complication data well, although uncertainties on the functional reserve distribution and subunit radiosensitivity are highly correlated. Conclusion: The observed radiation hepatitis complications show a threshold effect that can be described well with a parallel architecture model. However, additional independent studies are required to better determine the parameters defining the functional reserve distribution and subunit radiosensitivity
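
    A parallel-architecture NTCP calculation of the kind described can be sketched in two steps: sum subunit damage probabilities over a differential dose-volume histogram, then compare the damaged fraction against a population distribution of functional reserves. All functional forms and parameter values below are illustrative, not the fitted values from this study:

```python
import math

def damaged_fraction(dvh, d50, k):
    """Fraction of liver subunits damaged, from a differential DVH given as
    (dose_Gy, volume_fraction) pairs, with a logistic subunit dose response."""
    return sum(v / (1.0 + (d50 / d) ** k) for d, v in dvh if d > 0)

def ntcp(dvh, d50, k, reserve_mean, reserve_sd):
    """Complication probability: the chance that the damaged fraction
    exceeds a normally distributed functional reserve."""
    f = damaged_fraction(dvh, d50, k)
    z = (f - reserve_mean) / reserve_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Illustrative DVH: 40% of volume at 10 Gy, 40% at 30 Gy, 20% at 60 Gy
dvh = [(10.0, 0.4), (30.0, 0.4), (60.0, 0.2)]
p = ntcp(dvh, d50=40.0, k=4, reserve_mean=0.5, reserve_sd=0.1)
```

    The threshold behavior reported in the study falls out naturally: NTCP stays near zero until the damaged fraction approaches the lower tail of the reserve distribution, then rises steeply.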

  3. An analysis method for harmonic resonance and stability of multi-paralleled LCL-filtered inverters

    DEFF Research Database (Denmark)

    Lu, Minghui; Wang, Xiongfei; Blaabjerg, Frede

    2015-01-01

    Paralleled grid-connected inverters with LCL-filters are coupled through the non-negligible grid impedance. However, the coupling effects among inverters and grid are usually ignored during the design, which may lead to unexpected system resonance and even instability. This paper thus investigates...

  4. Analysis of Properties of Induction Machine with Combined Parallel Star-Delta Stator Winding

    Czech Academy of Sciences Publication Activity Database

    Schreier, Luděk; Bendl, Jiří; Chomát, Miroslav

    2017-01-01

    Roč. 113, č. 1 (2017), s. 147-153 ISSN 0239-3646 R&D Projects: GA ČR(CZ) GA16-07795S Institutional support: RVO:61388998 Keywords : induction machine * parallel combined stator winding Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering OBOR OECD: Electrical and electronic engineering

  5. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
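    The data decomposition and task granularity concepts surveyed above can be illustrated with a minimal image-space (tile-based) decomposition. The trivial shade function below is a hypothetical stand-in for a real rendering kernel, and the image and tile sizes are arbitrary choices.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 64, 16  # illustrative image and tile sizes

def shade(x, y):
    # Stand-in for a real rendering kernel: a simple gradient.
    return (x + y) % 256

def render_tile(origin):
    """Render one rectangular tile; tiles are independent tasks."""
    ox, oy = origin
    return origin, [[shade(ox + i, oy + j) for i in range(TILE)] for j in range(TILE)]

# Image-space decomposition: the frame is split into tiles, one task each.
tiles = [(x, y) for y in range((0), HEIGHT, TILE) for x in range(0, WIDTH, TILE)]

image = [[0] * WIDTH for _ in range(HEIGHT)]
with ThreadPoolExecutor() as pool:
    # Image assembly is the final gather step: copy each tile into place.
    for (ox, oy), block in pool.map(render_tile, tiles):
        for j, row in enumerate(block):
            image[oy + j][ox:ox + TILE] = row
```

    Load balancing then amounts to how tiles are assigned to workers; finer tiles balance better but raise scheduling overhead, which is the granularity trade-off discussed in the survey.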

  6. A MapReduce-Based Parallel Frequent Pattern Growth Algorithm for Spatiotemporal Association Analysis of Mobile Trajectory Big Data

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2018-01-01

    Full Text Available Frequent pattern mining is an effective approach for spatiotemporal association analysis of mobile trajectory big data in data-driven intelligent transportation systems. While existing parallel algorithms have been successfully applied to frequent pattern mining of large-scale trajectory data, two major challenges are how to overcome the inherent defects of Hadoop to cope with taxi trajectory big data including massive small files and how to discover the implicit spatiotemporal frequent patterns with MapReduce. To conquer these challenges, this paper presents a MapReduce-based Parallel Frequent Pattern growth (MR-PFP) algorithm to analyze the spatiotemporal characteristics of taxi operation using large-scale taxi trajectories with massive small file processing strategies on a Hadoop platform. More specifically, we first implement three methods, that is, Hadoop Archives (HAR), CombineFileInputFormat (CFIF), and Sequence Files (SF), to overcome the existing defects of Hadoop and then propose two strategies based on their performance evaluations. Next, we incorporate SF into the Frequent Pattern growth (FP-growth) algorithm and then implement the optimized FP-growth algorithm on a MapReduce framework. Finally, we analyze the characteristics of taxi operation in both spatial and temporal dimensions by MR-PFP in parallel. The results demonstrate that MR-PFP is superior to the existing Parallel FP-growth (PFP) algorithm in efficiency and scalability.
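    The split-count-merge structure that MapReduce brings to frequent pattern mining can be sketched without Hadoop. This is not the MR-PFP algorithm itself (no FP-tree is built): the shards stand in for mapper inputs, and the toy zone-visit transactions are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def map_shard(transactions):
    """Map phase: count items and item pairs within one data shard."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        counts.update(items)                    # single items
        counts.update(combinations(items, 2))   # item pairs
    return counts

def reduce_counts(shard_counts, min_support):
    """Reduce phase: merge per-shard counters, keep frequent patterns."""
    total = Counter()
    for c in shard_counts:
        total.update(c)
    return {pattern: n for pattern, n in total.items() if n >= min_support}

# Toy "trajectory" transactions (e.g., visited zones), split into two shards
# the way a MapReduce job would partition its input.
shards = [
    [["A", "B", "C"], ["A", "B"]],
    [["A", "C"], ["B", "C"], ["A", "B", "C"]],
]
frequent = reduce_counts([map_shard(s) for s in shards], min_support=3)
```

    A real FP-growth pass would replace the pair enumeration with conditional FP-trees, but the shard-then-merge skeleton is the same.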

  7. Conceptual design and kinematic analysis of a novel parallel robot for high-speed pick-and-place operations

    Science.gov (United States)

    Meng, Qizhi; Xie, Fugui; Liu, Xin-Jun

    2018-06-01

    This paper deals with the conceptual design, kinematic analysis and workspace identification of a novel four degrees-of-freedom (DOFs) high-speed spatial parallel robot for pick-and-place operations. The proposed spatial parallel robot consists of a base, four arms and a 1½ mobile platform. The mobile platform is a major innovation that avoids output singularity and offers the advantages of both single and double platforms. To investigate the characteristics of the robot's DOFs, a line graph method based on Grassmann line geometry is adopted in mobility analysis. In addition, the inverse kinematics is derived, and the constraint conditions to identify the correct solution are also provided. On the basis of the proposed concept, the workspace of the robot is identified using a set of presupposed parameters by taking input and output transmission index as the performance evaluation criteria.

  8. Analysis of Bernstein's factorization circuit

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Tomlinson, J.; Tromer, E.; Zheng, Y.

    2002-01-01

    In [1], Bernstein proposed a circuit-based implementation of the matrix step of the number field sieve factorization algorithm. These circuits offer an asymptotic cost reduction under the measure "construction cost x run time". We evaluate the cost of these circuits, in agreement with [1], but argue

  9. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  10. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    Science.gov (United States)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems posed by big data in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from the Vinton Dome, we obtain improved results, showing that the improved inversion algorithm is effective and feasible. The performance of the parallel algorithm we designed is better than that of other CUDA-based implementations; the maximum speedup can exceed 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
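    The multi-GPU speedup and efficiency metrics used in the scalability analysis have standard definitions, sketched below with invented timings (not measurements from the paper).

```python
def speedup(t_serial, t_parallel):
    """Classic speedup: single-device time over multi-device time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_devices):
    """Parallel efficiency: speedup normalized by device count;
    1.0 means perfect linear scaling."""
    return speedup(t_serial, t_parallel) / n_devices

# Illustrative timings in seconds (hypothetical 4-GPU run).
t1, t4 = 800.0, 250.0
s = speedup(t1, t4)        # 3.2x over one device
e = efficiency(t1, t4, 4)  # 0.8, i.e. 80 % of ideal scaling
```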

  11. A Parallel Reaction Monitoring Mass Spectrometric Method for Analysis of Potential CSF Biomarkers for Alzheimer's Disease

    DEFF Research Database (Denmark)

    Brinkmalm, Gunnar; Sjödin, Simon; Simonsen, Anja Hviid

    2018-01-01

    SCOPE: The aim of this study was to develop and evaluate a parallel reaction monitoring mass spectrometry (PRM-MS) assay consisting of a panel of potential protein biomarkers in cerebrospinal fluid (CSF). EXPERIMENTAL DESIGN: Thirteen proteins were selected based on their association with neurodegenerative diseases and involvement in synaptic function, secretory vesicle function, or the innate immune system. CSF samples were digested and two to three peptides per protein were quantified using stable isotope-labeled peptide standards. RESULTS: Coefficients of variation were generally below 15%. Clinical...

  12. Dynamic Analysis and Vibration Attenuation of Cable-Driven Parallel Manipulators for Large Workspace Applications

    Directory of Open Access Journals (Sweden)

    Jingli Du

    2013-01-01

    Full Text Available Cable-driven parallel manipulators are one of the best solutions for achieving a large workspace, since flexible cables can be easily stored on reels. However, due to the negligible flexural stiffness of cables, long cables will unavoidably vibrate during operation in large workspace applications. In this paper a finite element model for cable-driven parallel manipulators is proposed to mimic small-amplitude vibration of cables around their desired position. Output feedback of the cable tension variation at the end-effector is utilized to design the vibration attenuation controller, which aims at attenuating the vibration of cables by slightly varying the cable length, thus decreasing its effect on the end-effector. When cable vibration is attenuated, a motion controller can be designed to implement precise large motions that track given trajectories. A numerical example is presented to demonstrate the dynamic model and the control algorithm.

  13. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    Science.gov (United States)

    2008-07-07

    analyzing multivariate data sets. The system was developed using the Java Development Kit (JDK) version 1.5, and it yields interactive performance on a... script and captures output from MATLAB’s “regress” and “stepwisefit” utilities, which perform simple and stepwise regression, respectively. The MATLAB...Statistical Association, vol. 85, no. 411, pp. 664–675, 1990. [9] H. Hauser, F. Ledermann, and H. Doleisch, “Angular brushing of extended parallel coordinates

  14. Heat transfer analysis of GO-water nanofluid flow between two parallel disks

    Directory of Open Access Journals (Sweden)

    M. Azimi

    2015-03-01

    Full Text Available In this paper, the unsteady magnetohydrodynamic (MHD) squeezing flow between two parallel disks filled with nanofluid is considered. The Galerkin optimal homotopy asymptotic method (GOHAM) is used to obtain the solution of the governing equations. The effects of the Hartmann number, nanoparticle volume fraction, Brownian motion parameter, and suction/blowing parameter on nanofluid concentration, temperature, and velocity profiles have been discussed. Furthermore, a comparison between the obtained solutions and numerical ones has been provided.

  15. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Moreland, Kenneth [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2014-11-01

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It follows that efficient concurrency on exascale machines requires a massive number of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive number of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
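    The worklet idea, a localized stateless operation mapped over the data rather than a filter invoked per pipeline request, can be caricatured with an ordinary thread pool; the vector-magnitude worklet below is a made-up example, not code from the project.

```python
from concurrent.futures import ThreadPoolExecutor

def magnitude_worklet(vec):
    """A worklet: a stateless operation on one localized piece of data.
    It is invoked per element, unlike a pipeline filter that runs once
    per pipeline request."""
    x, y, z = vec
    return (x * x + y * y + z * z) ** 0.5

# A small vector field; in the framework this would be a huge data array.
vectors = [(3.0, 4.0, 0.0), (1.0, 2.0, 2.0), (0.0, 0.0, 5.0)]

# Fine-grained data parallelism: the scheduler maps the worklet over the
# field, one lightweight task per element.
with ThreadPoolExecutor() as pool:
    magnitudes = list(pool.map(magnitude_worklet, vectors))
```

    Because the worklet carries no state, any number of instances can run concurrently, which is what lets the model scale to very high thread counts.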

  16. Running accuracy analysis of a 3-RRR parallel kinematic machine considering the deformations of the links

    Science.gov (United States)

    Wang, Liping; Jiang, Yao; Li, Tiemin

    2014-09-01

    Parallel kinematic machines have drawn considerable attention and have been widely used in some special fields. However, high precision is still one of the challenges when they are used for advanced machine tools. One of the main reasons is that the kinematic chains of parallel kinematic machines are composed of elongated links that can easily suffer deformations, especially at high speeds and under heavy loads. A 3-RRR parallel kinematic machine is taken as a study object for investigating its accuracy with the consideration of the deformations of its links during the motion process. Based on the dynamic model constructed by the Newton-Euler method, all the inertia loads and constraint forces of the links are computed and their deformations are derived. Then the kinematic errors of the machine are derived with the consideration of the deformations of the links. Through further derivation, the accuracy of the machine is given in a simple explicit expression, which will be helpful to increase the calculating speed. The accuracy of this machine when following a selected circle path is simulated. The influences of magnitude of the maximum acceleration and external loads on the running accuracy of the machine are investigated. The results show that the external loads will deteriorate the accuracy of the machine tremendously when their direction coincides with the direction of the worst stiffness of the machine. The proposed method provides a solution for predicting the running accuracy of the parallel kinematic machines and can also be used in their design optimization as well as selection of suitable running parameters.

  17. Comparative analysis of serial and parallel laser patterning of Ag nanowire thin films

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Harim; Lee, Myeongkyu, E-mail: myeong@yonsei.ac.kr

    2017-03-31

    Highlights: • Serial and parallel laser patterning of Ag nanowire thin films is comparatively analyzed. • AgNW film can be directly patterned by a spatially-modulated pulsed Nd:YAG laser beam. • An area of 2.24 cm² can be simultaneously patterned by a single pulse with energy of 350 mJ. - Abstract: Ag nanowire (AgNW) films solution-coated on a glass substrate were laser-patterned in two different ways. For the conventional serial process, a pulsed ultraviolet laser of 30 kHz repetition rate and ∼20 ns pulse width was employed as the laser source. For parallel patterning, the film was directly irradiated by a spatially-modulated Nd:YAG laser beam that has a low repetition rate of 10 kHz and a shorter pulse width of 5 ns. While multiple pulses with energy density ranging from 3 to 9 J/cm² were required to pattern the film in the serial process, a single pulse with energy density of 0.16 J/cm² completely removed the AgNWs in the parallel patterning. This may be explained by the difference in patterning mechanism. In the parallel process using short pulses of 5 ns width, AgNWs can be removed in their solid state by the laser-induced thermo-elastic force, whereas they must be evaporated in the serial process utilizing a high-repetition-rate laser. Important process parameters such as threshold energy density, speed, and available feature sizes are comparatively discussed for the two patterning approaches.

  18. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running it can make progress, whereas an MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  19. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    International Nuclear Information System (INIS)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-01-01

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running it can make progress, whereas an MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  20. Heat transfer and flow analysis of nanofluid flow between parallel plates in presence of variable magnetic field using HPM

    Energy Technology Data Exchange (ETDEWEB)

    Hatami, M., E-mail: m.hatami@tue.nl [Esfarayen University of Technology, Mechanical Engineering Department, Esfarayen, North Khorasan (Iran, Islamic Republic of); Jing, Dengwei; Song, Dongxing [International Research Center for Renewable Energy, State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, Xi’an 710049 (China); Sheikholeslami, M.; Ganji, D.D. [Department of Mechanical Engineering, Babol University of Technology, Babol (Iran, Islamic Republic of)

    2015-12-15

    In this study, the effect of a variable magnetic field on nanofluid flow and heat transfer between two parallel disks is investigated. By using appropriate transformations for the velocity, temperature, and concentration, the basic equations governing the flow, heat, and mass transfer were reduced to a set of ordinary differential equations. These equations, subject to the associated boundary conditions, were solved analytically using the homotopy perturbation method. The analytical investigation is carried out for different governing parameters, namely: squeeze number, suction parameter, Hartmann number, Brownian motion parameter, thermophoretic parameter, and Lewis number. Results show that the Nusselt number has a direct relationship with the Brownian motion parameter and thermophoretic parameter but is a decreasing function of the squeeze number, suction parameter, Hartmann number, and Lewis number. - Highlights: • Heat and mass transfer of nanofluids between parallel plates investigated. • A variable magnetic field is applied on the plates. • Governing equations are solved analytically. • Effects of physical parameters are discussed on the Nusselt number.

  1. Scattering Analysis of a Compact Dipole Array with Series and Parallel Feed Network including Mutual Coupling Effect

    Directory of Open Access Journals (Sweden)

    H. L. Sneha

    2013-01-01

    Full Text Available The current focus in the defense arena is on stealth technology, with an emphasis on controlling the radar cross-section (RCS). The scattering from antennas mounted over the platform is of prime importance, especially for a low-observable aerospace vehicle. This paper presents the analysis of the scattering cross section of a uniformly spaced linear dipole array. Two types of feed networks, that is, series and parallel feed networks, are considered. The total RCS of a phased array with either kind of feed network is obtained by following the signal as it enters through the aperture and travels through the feed network. The RCS estimation of the array is done including the mutual coupling effect between the dipole elements in three configurations, that is, side-by-side, collinear, and parallel-in-echelon. The results presented can be useful when designing a phased array with optimum performance towards low observability.

  2. Heat transfer and flow analysis of nanofluid flow between parallel plates in presence of variable magnetic field using HPM

    International Nuclear Information System (INIS)

    Hatami, M.; Jing, Dengwei; Song, Dongxing; Sheikholeslami, M.; Ganji, D.D.

    2015-01-01

    In this study, the effect of a variable magnetic field on nanofluid flow and heat transfer between two parallel disks is investigated. By using appropriate transformations for the velocity, temperature, and concentration, the basic equations governing the flow, heat, and mass transfer were reduced to a set of ordinary differential equations. These equations, subject to the associated boundary conditions, were solved analytically using the homotopy perturbation method. The analytical investigation is carried out for different governing parameters, namely: squeeze number, suction parameter, Hartmann number, Brownian motion parameter, thermophoretic parameter, and Lewis number. Results show that the Nusselt number has a direct relationship with the Brownian motion parameter and thermophoretic parameter but is a decreasing function of the squeeze number, suction parameter, Hartmann number, and Lewis number. - Highlights: • Heat and mass transfer of nanofluids between parallel plates investigated. • A variable magnetic field is applied on the plates. • Governing equations are solved analytically. • Effects of physical parameters are discussed on the Nusselt number.

  3. Analysis of parameters for technological equipment of parallel kinematics based on rods of variable length for processing accuracy assurance

    Science.gov (United States)

    Koltsov, A. G.; Shamutdinov, A. H.; Blokhin, D. A.; Krivonos, E. V.

    2018-01-01

    A new classification of parallel kinematics mechanisms based on a symmetry coefficient, which is proportional to mechanism stiffness and to the accuracy of products processed with the technological equipment under study, is proposed. A new version of the Stewart platform with a high symmetry coefficient is presented for analysis. The workspace of the mechanism under study, a complex solid figure, is described. The workspace end points are reached by the center of the mobile platform, which moves parallel to the base plate. Parameters affecting the processing accuracy, namely the static and dynamic stiffness and natural vibration frequencies, are determined. A capability assessment of the mechanism operating under various loads, taking into account resonance phenomena at different points of the workspace, was conducted. The study proved that stiffness, and therefore processing accuracy, with the use of the above-mentioned mechanisms is comparable with the stiffness and accuracy of medium-sized series-produced machines.

  4. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  5. An efficient, interactive, and parallel system for biomedical volume analysis on a standard workstation

    International Nuclear Information System (INIS)

    Rebuffel, V.; Gonon, G.

    1992-01-01

    A software package is presented that can be employed for any 3D imaging modality: X-ray tomography, emission tomography, or magnetic resonance imaging. This system uses a hierarchical data structure, named Octree, that naturally allows a multi-resolution approach. The well-known problems of such an indeterministic representation, especially neighbor finding, have been solved. Several algorithms of volume processing have been developed using these techniques and an optimal data storage for the Octree. A parallel implementation was chosen that is compatible with the constraints of the Octree base and the various algorithms. (authors) 4 refs., 3 figs., 1 tab

  6. Analysis of thermal dispersion in an array of parallel plates with fully-developed laminar flow

    International Nuclear Information System (INIS)

    Xu Jiaying; Lu Tianjian; Hodson, Howard P.; Fleck, Norman A.

    2010-01-01

    The effect of thermal dispersion upon heat transfer across a periodic array of parallel plates is studied. Three basic heat transfer problems are addressed, each for steady, fully-developed, laminar fluid flow: (a) transient heat transfer due to an arbitrary initial temperature distribution within the fluid, (b) steady heat transfer with constant heat flux on all plate surfaces, and (c) steady heat transfer with constant wall temperatures. For problems (a) and (b), the effective thermal dispersivity scales with the Peclet number Pe according to 1 + C·Pe², where the coefficient C is independent of Pe. For problem (c) the coefficient C is a function of Pe.
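    Written out, the quoted scaling for the effective dispersivity is of Taylor-Aris type; the symbols below (molecular thermal diffusivity D, mean velocity U, characteristic plate spacing H) are assumed for illustration, since the abstract does not define the Peclet number's length scale:

```latex
\frac{D_{\mathrm{eff}}}{D} = 1 + C\,\mathrm{Pe}^{2},
\qquad \mathrm{Pe} = \frac{U H}{D},
```

    with C independent of Pe for problems (a) and (b), and C = C(Pe) for problem (c).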

  7. A numerical analysis of a reciprocating Active Magnetic Regenerator with a parallel-plate regenerator geometry

    DEFF Research Database (Denmark)

    Petersen, Thomas Frank; Pryds, Nini; Smith, Anders

    2007-01-01

    We have developed a two-dimensional model of a reciprocating Active Magnetic Regenerator (AMR) with a regenerator made of parallel plates arranged in a stack configuration. The time-dependent, two-dimensional model solves the Navier-Stokes equations for the heat transfer fluid and the coupled heat transfer equations for the regenerator and the fluid. The model is implemented using the Finite Element Method. The model can be used to study both transient and steady-state phenomena in the AMR for any ratio of regenerator to fluid heat capacity. Results on the AMR performance for different design

  8. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  9. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    Science.gov (United States)

    Buddala, Santhoshi Snigdha

    Since the industrial revolution, fossil fuels like petroleum, coal, oil, natural gas, and other non-renewable energy sources have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous in nature and tend to deplete the protective layers and affect the overall environmental balance. Fossil fuels are also finite energy resources, and their rapid depletion has prompted the need to investigate alternative sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency by comparing it with traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
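    Why a parallel connection degrades gracefully under partial shading can be sketched with an ideal single-diode cell model: cells share the terminal voltage and their currents add, so a shaded cell subtracts only its own lost photocurrent. The model and all parameter values below are illustrative assumptions, not the small-signal model proposed in the thesis.

```python
import math

INV_VT = 1 / 0.0257  # 1 / thermal voltage (V) near 25 °C, ideality factor 1

def cell_current(v, irradiance, i_ph_stc=5.0, i_sat=1e-9):
    """Ideal single-diode cell: photocurrent scales with irradiance (0..1),
    minus the exponential diode term at terminal voltage v."""
    return i_ph_stc * irradiance - i_sat * (math.exp(v * INV_VT) - 1.0)

def parallel_array_current(v, irradiances):
    """Cells in parallel share the terminal voltage; currents simply add."""
    return sum(cell_current(v, g) for g in irradiances)

# Partial shading: one of four cells drops to 20 % irradiance.
i_full = parallel_array_current(0.45, [1.0, 1.0, 1.0, 1.0])
i_shaded = parallel_array_current(0.45, [1.0, 1.0, 1.0, 0.2])
```

    The output drop equals the shaded cell's lost photocurrent only; there is no series-string collapse, which is the graceful-degradation property the parallel architecture trades against its low terminal voltage.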

  10. Experimental analysis of a capillary pumped loop for terrestrial applications with several evaporators in parallel

    International Nuclear Information System (INIS)

    Blet, Nicolas; Bertin, Yves; Ayel, Vincent; Romestant, Cyril; Platel, Vincent

    2016-01-01

    Highlights: • This paper introduces experimental studies of a CPLTA with 3 evaporators in parallel. • Operating principles of the mono-evaporator CPLTA are recalled. • A reference test on the new bench with only one evaporator is introduced. • The global behavior of the multi-evaporator loop is presented and discussed. • Some additional thermohydraulic couplings are revealed. - Abstract: In the context of high-dissipation electronics cooling for ground transportation, a new design of two-phase loop has been developed in recent years: the capillary pumped loop for terrestrial application (CPLTA). This hybrid system, between the two standard architectures of the capillary pumped loop (CPL) and the loop heat pipe (LHP), has been widely investigated with a single evaporator, and thus a single dissipative area, to establish its main operating principles and the thermohydraulic couplings between the components. To extend its scope of applications, a new experimental CPLTA with three evaporators in parallel is studied in this paper, with methanol as the working fluid. Although the dynamics of the loop in multi-evaporator mode appears broadly similar to that with a single operating evaporator, additional couplings are highlighted between the several evaporators. In particular, a decoupling between the vapor generation flow rate and the pressure drop in each evaporator is revealed. The impact of this phenomenon on the conductance at the evaporator is analyzed.

  11. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    Science.gov (United States)

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
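
    The "pseudo multiple replica" idea above applies to any linear reconstruction: push many synthetic noise-only acquisitions through the reconstruction operator and take the pixel-wise standard deviation as the noise map. A minimal sketch with a random matrix standing in for the SENSE/GRAPPA operator (the operator, sizes, and replica count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear reconstruction: image = R @ kspace_data. In practice R would be the
# SENSE/GRAPPA operator; here a random matrix stands in for any linear recon.
n_k, n_pix = 64, 32
R = rng.standard_normal((n_pix, n_k)) / np.sqrt(n_k)

def pseudo_replica_noise_map(recon, n_replicas=2000, sigma=1.0):
    """Monte Carlo noise map: push pure-noise 'acquisitions' through the linear
    reconstruction and take the pixel-wise standard deviation."""
    noise = sigma * rng.standard_normal((n_replicas, recon.shape[1]))
    images = noise @ recon.T          # each row is one reconstructed replica
    return images.std(axis=0)

noise_map = pseudo_replica_noise_map(R)

# For a linear recon with unit white input noise the exact answer is
# sqrt(diag(R R^T)), so the Monte Carlo map can be validated directly.
analytic = np.sqrt(np.sum(R**2, axis=1))
print(np.max(np.abs(noise_map - analytic) / analytic))  # small relative error
```

    The g-factor then follows by dividing the accelerated SNR map into the unaccelerated one and correcting by the square root of the acceleration factor.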

  12. Analysis of technological, institutional and socioeconomic factors ...

    African Journals Online (AJOL)

    Analysis of technological, institutional and socioeconomic factors that influences poor reading culture among secondary school students in Nigeria. ... Proliferation and availability of smart phones, chatting culture and social media were identified as technological factors influencing poor reading culture among secondary ...

  13. Development of a parallel zoomed EVI sequence for high temporal resolution analysis of the BOLD response

    International Nuclear Information System (INIS)

    Rabrait, C.

    2006-01-01

    The hemodynamic impulse response to any short stimulus typically lasts around 20 seconds. Thus, detection of the Blood Oxygenation Level Dependent (BOLD) effect is usually performed using a 2D Echo Planar Imaging (EPI) sequence, with repetition times on the order of 1 or 2 seconds. This temporal resolution is generally sufficient for detection purposes. Nevertheless, when trying to accurately estimate hemodynamic response functions (HRF), higher scanning rates represent a real advantage. Thus, in order to reach a temporal resolution of around 200 ms, we developed a new acquisition method based on Echo Volumar Imaging and 2D parallel acquisition (1). Echo Volumar Imaging (EVI) was proposed in 1977 by Mansfield (2). As a 3D single-shot acquisition method, EVI intrinsically possesses many advantages for functional neuroimaging. Nevertheless, to date, only a few applications have been reported (3, 4). In practice, very restrictive hardware requirements make EVI difficult to perform under satisfactory experimental conditions, even today. The critical point in EVI is the echo train duration, which is longer than in EPI due to 3D acquisition. Indeed, at equal field of view and spatial resolution, the EVI echo train duration must be approximately equal to the EPI echo train duration multiplied by the number of slices acquired in EPI. Consequently, EVI is much more sensitive than EPI to geometric distortions, which are related to phase errors, and also to signal losses, which are due to long echo times (TE). A first improvement was brought by 'zoomed' or 'localized' EVI (5), which makes it possible to focus on a small volume of interest and thus to limit echo train durations compared to full-FOV acquisitions. To reduce echo train durations further, we chose to apply parallel acquisition. Moreover, since EVI is a 3D acquisition method, we are able to perform parallel acquisition and SENSE reconstruction along the two phase directions (6). The R = 4 under-sampling consists in the
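
    The echo-train scaling stated above (EVI train ≈ EPI train × number of slices, shortened by the total 2D acceleration) can be made concrete with hypothetical but typical numbers; none of these values are taken from the paper:

```python
# Hypothetical illustration: a 2D EPI echo train of ~40 ms and 24 slices.
t_epi_train_ms = 40.0
n_slices = 24

# Single-shot EVI must traverse the whole 3-D k-space in one shot, so its echo
# train is roughly the EPI train duration times the number of slices.
t_evi_train_ms = t_epi_train_ms * n_slices
print(t_evi_train_ms)  # 960.0 ms -- far too long given T2* decay

# 2-D parallel undersampling (reductions r1 and r2 along the two phase axes)
# shortens the train by the total acceleration r1 * r2.
r1, r2 = 2, 2
t_accel_ms = t_evi_train_ms / (r1 * r2)
print(t_accel_ms)  # 240.0 ms
```

    This is why combining zoomed acquisition (smaller FOV, fewer lines) with 2D SENSE acceleration is what makes single-shot EVI practical.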

  14. Parallel FE Electron-Photon Transport Analysis on 2-D Unstructured Mesh

    International Nuclear Information System (INIS)

    Drumm, C.R.; Lorenz, J.

    1999-01-01

    A novel solution method has been developed to solve the coupled electron-photon transport problem on an unstructured triangular mesh. Instead of tackling the first-order form of the linear Boltzmann equation, this approach is based on the second-order form in conjunction with the conventional multi-group discrete-ordinates approximation. The highly forward-peaked electron scattering is modeled with a multigroup Legendre expansion derived from the Goudsmit-Saunderson theory. The finite element method is used to treat the spatial dependence. The solution method is unique in that the space-direction dependence is solved simultaneously, eliminating the need for the conventional inner iterations, a method that is well suited for massively parallel computers
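
    The multigroup Legendre treatment of the forward-peaked electron scattering mentioned above expands the group-to-group differential scattering cross section in Legendre polynomials of the scattering cosine. This is the standard form of that expansion, not quoted verbatim from the paper:

```latex
\sigma_{s}^{g' \to g}(\mu_0) \;\approx\; \sum_{l=0}^{L} \frac{2l+1}{4\pi}\,
\sigma_{s,l}^{g' \to g}\, P_l(\mu_0),
\qquad
\sigma_{s,l}^{g' \to g} \;=\; 2\pi \int_{-1}^{1} \sigma_{s}^{g' \to g}(\mu_0)\, P_l(\mu_0)\, d\mu_0 ,
```

    where \(\mu_0\) is the scattering cosine and \(P_l\) the Legendre polynomials; highly forward-peaked electron scattering requires a large expansion order \(L\), which is what the Goudsmit-Saunderson-derived moments supply.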

  15. A massively parallel GPU-accelerated model for analysis of fully nonlinear free surface waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Madsen, Morten G.; Glimberg, Stefan Lemvig

    2011-01-01

    -storage flexible-order accurate finite difference method that is known to be efficient and scalable on a CPU core (single thread). To achieve parallel performance of the relatively complex numerical model, we investigate a new trend in high-performance computing where many-core GPUs are utilized as high......-throughput co-processors to the CPU. We describe and demonstrate how this approach makes it possible to do fast desktop computations for large nonlinear wave problems in numerical wave tanks (NWTs) with close to 50/100 million total grid points in double/single precision with 4 GB global device memory...... available. A new code base has been developed in C++ and compute unified device architecture C and is found to improve the runtime by more than an order of magnitude in double precision arithmetic for the same accuracy over an existing CPU (single thread) Fortran 90 code when executed on a single modern GPU...

  16. Dynamic Analysis of Planar 3-RRR Flexible Parallel Robots with Dynamic Stiffening

    Directory of Open Access Journals (Sweden)

    Qinghua Zhang

    2014-01-01

    Full Text Available In consideration of the second-order coupling quantity of the axial displacement caused by the transverse displacement of the flexible beam, the first-order approximation coupling model of planar 3-RRR flexible parallel robots is presented, in which the rigid-body motion constraints, elastic deformation motion constraints, and dynamic constraints of the moving platform are considered. For different speeds of the moving platform, numerical simulation results using the conventional zero-order approximation coupling model and the proposed first-order approximation coupling model show that the effect of the "dynamic stiffening" term on the dynamic characteristics of the system is insignificant and can be neglected, and that the zero-order approximation coupling model is sufficiently precise to capture the essential dynamic characteristics of the system. The commercial software ANSYS 13.0 is then used to confirm the validity of the zero-order approximation coupling model.

  17. Nonlinear Elastodynamic Behaviour Analysis of High-Speed Spatial Parallel Coordinate Measuring Machines

    Directory of Open Access Journals (Sweden)

    Xiulong Chen

    2012-10-01

    Full Text Available In order to study the elastodynamic behaviour of 4-UPS-UPU (universal joint-prismatic pair-spherical joint / universal joint-prismatic pair-universal joint) high-speed spatial PCMMs (parallel coordinate measuring machines), a nonlinear time-varying dynamics model, which comprehensively considers geometric nonlinearity and the rigid-flexible coupling effect, is derived by using Lagrange equations and finite element methods. Based on the Newmark method, the kinematic output response of the 4-UPS-UPU PCMM is illustrated through numerical simulation. The simulation results show that the flexibility of the links has a significant impact on the dynamic response of the system. This research provides a theoretical basis for the optimization design and vibration control of 4-UPS-UPU PCMMs.
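
    The Newmark time integration mentioned in the abstract is a standard implicit scheme for structural dynamics. A minimal single-degree-of-freedom sketch (the multi-body PCMM model itself is far larger; parameters here are illustrative) using the unconditionally stable average-acceleration variant:

```python
import numpy as np

def newmark(m, c, k, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*x'' + c*x' + k*x = f(t) for one DOF.
    beta=1/4, gamma=1/2 is the unconditionally stable average-acceleration rule."""
    x, v = x0, v0
    a = (f(0.0) - c * v - k * x) / m
    xs = [x]
    for i in range(1, n_steps + 1):
        t = i * dt
        # Standard effective-stiffness / effective-load form of the update.
        k_eff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
        rhs = (f(t)
               + m * (x / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
               + c * (gamma / (beta * dt) * x + (gamma/beta - 1) * v
                      + dt * (gamma/(2*beta) - 1) * a))
        x_new = rhs / k_eff
        v_new = (gamma / (beta * dt)) * (x_new - x) + (1 - gamma/beta) * v \
                + dt * (1 - gamma/(2*beta)) * a
        a_new = (x_new - x) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
        x, v, a = x_new, v_new, a_new
        xs.append(x)
    return np.array(xs)

# Free vibration of an undamped oscillator (omega = 2 rad/s, period T = pi s),
# started at x = 1: the response should return to ~1 after one full period.
xs = newmark(m=1.0, c=0.0, k=4.0, f=lambda t: 0.0, x0=1.0, v0=0.0,
             dt=0.01, n_steps=314)
print(xs[-1])
```

    The average-acceleration rule conserves amplitude for undamped systems, which makes it a common default for flexible-multibody response computations like the one in the paper.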

  18. Epidermal growth factor receptor signalling in human breast cancer cells operates parallel to estrogen receptor α signalling and results in tamoxifen insensitive proliferation

    International Nuclear Information System (INIS)

    Moerkens, Marja; Zhang, Yinghui; Wester, Lynn; Water, Bob van de; Meerman, John HN

    2014-01-01

    Tamoxifen resistance is a major problem in the treatment of estrogen receptor (ER) α-positive breast cancer patients. Although the mechanisms behind tamoxifen resistance are still not completely understood, clinical data suggest that increased expression of receptor tyrosine kinases is involved. Here, we studied the estrogen and anti-estrogen sensitivity of human breast cancer MCF7 cells that have a moderate, retroviral-mediated, ectopic expression of epidermal growth factor receptor (MCF7-EGFR). Proliferation of MCF7-EGFR and parental cells was induced by 17β-estradiol (E2), epidermal growth factor (EGF) or a combination of these. Inhibition of proliferation under these conditions was investigated with 4-hydroxy-tamoxifen (TAM) or fulvestrant at 10^-12 to 10^-6 M. Cells were lysed at different time points to determine the phosphorylation status of EGFR, MAPK1/3, AKT and the expression of ERα. Knockdown of target genes was established using smartpool siRNAs. Transcriptomics analysis was done 6 hr after stimulation with growth factors using Affymetrix HG-U133 PM array plates. While proliferation of parental MCF7 cells could only be induced by E2, proliferation of MCF7-EGFR cells could be induced by either E2 or EGF. Treatment with TAM or fulvestrant did significantly inhibit proliferation of MCF7-EGFR cells stimulated with E2 alone. EGF treatment of E2/TAM treated cells led to marked cell proliferation, thereby overruling the anti-estrogen-mediated inhibition of cell proliferation. Under these conditions, TAM did, however, still inhibit ERα-mediated transcription. While siRNA-mediated knockdown of EGFR inhibited the EGF-driven proliferation under the TAM/E2/EGF condition, knockdown of ERα did not. The TAM-resistant cell proliferation mediated by the conditional EGFR signaling may be dependent on the PI3K/Akt pathway but not the MEK/MAPK pathway, since a MEK inhibitor (U0126) did not block the proliferation. Transcriptomic analysis under the various E2/TAM

  19. Further optimization of a parallel double-effect organosilicon distillation scheme through exergy analysis

    International Nuclear Information System (INIS)

    Sun, Jinsheng; Dai, Leilei; Shi, Ming; Gao, Hong; Cao, Xijia; Liu, Guangxin

    2014-01-01

    In our previous work, a significant improvement in organosilicon monomer distillation using parallel double-effect heat integration between a heavies-removal column and six other columns, as well as heat integration between the methyltrichlorosilane and dimethylchlorosilane columns, reduced the total exergy loss of the currently running counterpart by 40.41%. Further research regarding this optimized scheme demonstrated that it was necessary to reduce the higher operating pressure of the methyltrichlorosilane column, which is required for heat integration between the methyltrichlorosilane and dimethylchlorosilane columns. Therefore, in this contribution, a challenger scheme is presented with heat pumps introduced separately from the originally heat-coupled methyltrichlorosilane and dimethylchlorosilane columns in the above-mentioned optimized scheme, which serves as the prototype for this work. Both schemes are simulated using the same purity requirements used in running industrial units. The thermodynamic properties from the simulation are used to calculate the energy consumption and exergy loss of the two schemes. The results show that the heat pump option further reduces the flowsheet energy consumption and exergy loss by 27.35% and 10.98% relative to the prototype scheme. These results indicate that heat pumps are superior to heat integration in the context of energy savings during organosilicon monomer distillation. - Highlights: • Parallel double-effect and heat pump distillation are combined for organosilicon distillation. • Double-effect heat integration is compared with heat pumps in terms of energy savings. • Flowsheet energy consumption and exergy loss are further reduced by 27.35% and 10.98%, respectively

  20. Modeling, analysis, and design of stationary reference frame droop controlled parallel three-phase voltage source inverters

    DEFF Research Database (Denmark)

    Vasquez, Juan Carlos; Guerrero, Josep M.; Savaghebi, Mehdi

    2011-01-01

    Power electronics based microgrids consist of a number of voltage source inverters (VSIs) operating in parallel. In this paper, the modeling, control design, and stability analysis of three-phase VSIs are derived. The proposed voltage and current inner control loops and the mathematical models...... and discussed. Experimental results are provided to validate the performance and robustness of the VSIs' functionality during islanded and grid-connected operations, allowing a seamless transition between these modes through control hierarchies by regulating frequency and voltage, main-grid interactivity......

  1. Analysis of Parallel Algorithms on SMP Node and Cluster of Workstations Using Parallel Programming Models with New Tile-based Method for Large Biological Datasets

    Science.gov (United States)

    Shrimankar, D. D.; Sathe, S. R.

    2016-01-01

    Sequence alignment is an important tool for describing the relationships between DNA sequences. Many sequence alignment algorithms exist, differing in efficiency, in their models of the sequences, and in the relationship between sequences. The focus of this study is to obtain an optimal alignment between two sequences of biological data, particularly DNA sequences. The algorithm is discussed with particular emphasis on time, speedup, and efficiency optimizations. Parallel programming presents a number of critical challenges to application developers. Today’s supercomputers often consist of clusters of SMP nodes. Programming paradigms such as OpenMP and MPI are used to write parallel codes for such architectures. However, OpenMP programs cannot scale beyond a single SMP node, whereas programs written in MPI can span multiple SMP nodes. Such a programming paradigm, though, carries an overhead of internode communication. In this work, we explore the tradeoffs between using OpenMP and MPI. We demonstrate that communication overhead is significant even in OpenMP loop execution and increases with the number of participating cores. We also present a communication model to approximate the overhead from communication in OpenMP loops. Our results are striking and hold for a large variety of input data files. We have developed our own load balancing and cache optimization technique for the message passing model. Our experimental results show that our techniques give optimum performance of our parallel algorithm for various input parameters, such as sequence size and tile size, on a wide variety of multicore architectures. PMID:27932868
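
    The optimal global alignment that the study targets is classically computed with Needleman-Wunsch dynamic programming, whose table can be tiled and filled wavefront-parallel, which is the kind of decomposition a tile-based method exploits. A minimal serial sketch with an illustrative scoring scheme (the paper's parallel tiling and load balancing are not reproduced here):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score of sequences a and b by dynamic programming."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # first column: all-gap alignment
        score[i][0] = i * gap
    for j in range(1, m + 1):          # first row: all-gap alignment
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag,                 # align a[i-1] with b[j-1]
                              score[i-1][j] + gap,  # gap in b
                              score[i][j-1] + gap)  # gap in a
    return score[n][m]

print(needleman_wunsch("GATTACA", "GATTACA"))  # 7: seven matches, no gaps
print(needleman_wunsch("GATTACA", "GCATGCU"))
```

    Each cell depends only on its left, upper, and diagonal neighbors, so anti-diagonal blocks of the table are independent and can be assigned to different threads or MPI ranks.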

  3. Hand function evaluation: a factor analysis study.

    Science.gov (United States)

    Jarus, T; Poremba, R

    1993-05-01

    The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation that can be used for current clinical purposes is provided.
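
    The factor-extraction step underlying a study like this reduces to eigenanalysis of the item correlation matrix. A minimal sketch on synthetic data (the item battery and two-factor structure here are invented for illustration, not the Jebsen/Smith data), using the Kaiser eigenvalue-greater-than-one retention rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "test battery": 6 items driven by two latent factors plus noise.
n = 500
f1, f2 = rng.standard_normal((2, n))
items = np.column_stack([
    f1 + 0.3 * rng.standard_normal(n),
    f1 + 0.3 * rng.standard_normal(n),
    f1 + 0.3 * rng.standard_normal(n),
    f2 + 0.3 * rng.standard_normal(n),
    f2 + 0.3 * rng.standard_normal(n),
    f2 + 0.3 * rng.standard_normal(n),
])

# Eigenvalues of the item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser criterion: retain factors whose eigenvalue exceeds 1.
n_factors = int(np.sum(eigvals > 1.0))
print(n_factors)  # recovers the 2 latent factors planted above
```

    A varimax rotation of the retained loadings, as used in the study, would then redistribute variance to make each factor load on a distinct item cluster.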

  4. Identification and analysis of common bean (Phaseolus vulgaris L. transcriptomes by massively parallel pyrosequencing

    Directory of Open Access Journals (Sweden)

    Thimmapuram Jyothi

    2011-10-01

    Full Text Available Abstract Background Common bean (Phaseolus vulgaris is the most important food legume in the world. Although this crop is very important to both the developed and developing world as a means of dietary protein supply, resources available in common bean are limited. Global transcriptome analysis is important to better understand gene expression, genetic variation, and gene structure annotation in addition to other important features. However, the number and description of common bean sequences are very limited, which greatly inhibits genome and transcriptome research. Here we used 454 pyrosequencing to obtain a substantial transcriptome dataset for common bean. Results We obtained 1,692,972 reads with an average read length of 207 nucleotides (nt. These reads were assembled into 59,295 unigenes including 39,572 contigs and 19,723 singletons, in addition to 35,328 singletons less than 100 bp. Comparing the unigenes to common bean ESTs deposited in GenBank, we found that 53.40% or 31,664 of these unigenes had no matches to this dataset and can be considered as new common bean transcripts. Functional annotation of the unigenes, carried out by Gene Ontology assignments from hits to Arabidopsis and soybean, indicated coverage of a broad range of GO categories. The common bean unigenes were also compared to the bean bacterial artificial chromosome (BAC end sequences, and a total of 21% of the unigenes (12,724, including 9,199 contigs and 3,256 singletons) matched the 8,823 BAC-end sequences. In addition, a large number of simple sequence repeats (SSRs and transcription factors were also identified in this study. Conclusions This work provides the first large-scale identification of the common bean transcriptome derived by 454 pyrosequencing. This research has resulted in a 150% increase in the number of Phaseolus vulgaris ESTs. 
The dataset obtained through this analysis will provide a platform for functional genomics in common bean and related legumes and

  5. Microenvironmental Heterogeneity Parallels Breast Cancer Progression: A Histology-Genomic Integration Analysis.

    Directory of Open Access Journals (Sweden)

    Rachael Natrajan

    2016-02-01

    Full Text Available The intra-tumor diversity of cancer cells is under intense investigation; however, little is known about the heterogeneity of the tumor microenvironment that is key to cancer progression and evolution. We aimed to assess the degree of microenvironmental heterogeneity in breast cancer and correlate this with genomic and clinical parameters. We developed a quantitative measure of microenvironmental heterogeneity along three spatial dimensions (3-D) in solid tumors, termed the tumor ecosystem diversity index (EDI), using fully automated histology image analysis coupled with statistical measures commonly used in ecology. This measure was compared with disease-specific survival, key mutations, genome-wide copy number, and expression profiling data in a retrospective study of 510 breast cancer patients as a test set and 516 breast cancer patients as an independent validation set. In high-grade (grade 3) breast cancers, we uncovered a striking link between high microenvironmental heterogeneity measured by EDI and a poor prognosis that cannot be explained by tumor size, genomics, or any other data types. However, this association was not observed in low-grade (grade 1 and 2) breast cancers. The prognostic value of EDI was superior to known prognostic factors and was enhanced with the addition of TP53 mutation status (multivariate analysis test set, p = 9 × 10^-4, hazard ratio = 1.47, 95% CI 1.17-1.84; validation set, p = 0.0011, hazard ratio = 1.78, 95% CI 1.26-2.52). Integration with genome-wide profiling data identified losses of specific genes on 4p14 and 5q13 that were enriched in grade 3 tumors with high microenvironmental diversity and that also substratified patients into poor prognostic groups. Limitations of this study include the number of cell types included in the model, that EDI has prognostic value only in grade 3 tumors, and that our spatial heterogeneity measure was dependent on spatial scale and tumor size. To our knowledge, this is the first
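
    The "statistical measures commonly used in ecology" on which an index like EDI builds are diversity statistics over cell-type proportions. A minimal sketch of the most common one, the Shannon index (the cell-type counts are invented for illustration; the paper's EDI additionally handles 3-D spatial structure):

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over category proportions,
    e.g. counts of cell types inside one tumor region."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

# A region dominated by one cell type scores lower than an evenly mixed one.
homogeneous = [97, 1, 1, 1]
mixed       = [25, 25, 25, 25]
print(shannon_index(homogeneous))
print(shannon_index(mixed))  # ln(4), the maximum for 4 cell types
```

    Averaging or otherwise aggregating such per-region values across a tumor yields a single heterogeneity score that can be tested against survival, as done for EDI.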

  6. High sensitivity and high Q-factor nanoslotted parallel quadrabeam photonic crystal cavity for real-time and label-free sensing

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Daquan [Rowland Institute at Harvard University, Cambridge, Massachusetts 02142 (United States); State Key Laboratory of Information Photonics and Optical Communications, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876 (China); School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States); Kita, Shota; Wang, Cheng; Lončar, Marko [School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States); Liang, Feng; Quan, Qimin [Rowland Institute at Harvard University, Cambridge, Massachusetts 02142 (United States); Tian, Huiping; Ji, Yuefeng [State Key Laboratory of Information Photonics and Optical Communications, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876 (China)

    2014-08-11

    We experimentally demonstrate a label-free sensor based on nanoslotted parallel quadrabeam photonic crystal cavity (NPQC). The NPQC possesses both high sensitivity and high Q-factor. We achieved sensitivity (S) of 451 nm/refractive index unit and Q-factor >7000 in water at telecom wavelength range, featuring a sensor figure of merit >2000, an order of magnitude improvement over the previous photonic crystal sensors. In addition, we measured the streptavidin-biotin binding affinity and detected 10 ag/mL concentrated streptavidin in the phosphate buffered saline solution.
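
    The figure of merit quoted above is consistent with the common refractometric-sensor definition FOM = S·Q/λ. A quick check with the abstract's numbers, assuming a 1550 nm operating wavelength (the abstract only says "telecom wavelength range"):

```python
# Common refractometric sensing figure of merit: FOM = S * Q / wavelength.
# S and Q are taken from the abstract; the 1550 nm wavelength is an assumption.
S = 451.0            # sensitivity, nm per refractive-index unit
Q = 7000.0           # quality factor (reported as > 7000)
wavelength_nm = 1550.0

fom = S * Q / wavelength_nm
print(round(fom))  # about 2037, consistent with the reported FOM > 2000
```

    Because FOM grows with Q, holding Q above 7000 while the cavity sits in water is what lets the nanoslotted design beat earlier photonic crystal sensors by an order of magnitude.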

  7. In-situ Isotopic Analysis at Nanoscale using Parallel Ion Electron Spectrometry: A Powerful New Paradigm for Correlative Microscopy

    Science.gov (United States)

    Yedra, Lluís; Eswara, Santhana; Dowsett, David; Wirtz, Tom

    2016-01-01

    Isotopic analysis is of paramount importance across the entire gamut of scientific research. To advance the frontiers of knowledge, a technique for nanoscale isotopic analysis is indispensable. Secondary Ion Mass Spectrometry (SIMS) is a well-established technique for analyzing isotopes, but its spatial-resolution is fundamentally limited. Transmission Electron Microscopy (TEM) is a well-known method for high-resolution imaging down to the atomic scale. However, isotopic analysis in TEM is not possible. Here, we introduce a powerful new paradigm for in-situ correlative microscopy called the Parallel Ion Electron Spectrometry by synergizing SIMS with TEM. We demonstrate this technique by distinguishing lithium carbonate nanoparticles according to the isotopic label of lithium, viz. 6Li and 7Li and imaging them at high-resolution by TEM, adding a new dimension to correlative microscopy. PMID:27350565

  8. Integrating human factors into process hazard analysis

    International Nuclear Information System (INIS)

    Kariuki, S.G.; Loewe, K.

    2007-01-01

    A comprehensive process hazard analysis (PHA) needs to address human factors. This paper describes an approach that systematically identifies human error in process design and the human factors that influence its production and propagation. The approach is deductive in nature and therefore considers human error as a top event. The combinations of different factors that may lead to this top event are analysed. The method is qualitative in nature and is used in combination with other PHA methods. It has the advantage that it does not treat operator error as the sole contributor to human failure within a system, but rather as a combination of all the underlying factors
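
    The deductive top-event idea described above is the same logic as a fault tree: the top event ("human error") is decomposed through OR/AND gates into basic contributing factors. A minimal sketch; the factor names and probabilities below are illustrative, not from the paper (the paper's method is qualitative):

```python
def p_or(*ps):
    """OR gate for independent events: 1 - prod(1 - p_i)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_and(*ps):
    """AND gate for independent events: prod(p_i)."""
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical performance-shaping factors and probabilities.
p_fatigue, p_poor_hmi, p_time_pressure, p_training_gap = 0.02, 0.05, 0.10, 0.03

# Top event: human error if (fatigue AND time pressure) OR poor HMI OR training gap.
p_top = p_or(p_and(p_fatigue, p_time_pressure), p_poor_hmi, p_training_gap)
print(p_top)
```

    Even without numbers, the same gate structure can be used purely qualitatively, enumerating the minimal combinations of factors that suffice to produce the top event.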

  9. Analysis of a fully developed laminar flow b/w two parallel plates ...

    African Journals Online (AJOL)

    ... Simulation software Comsol Multiphysics. The flow behavior and the interaction with the boundary have been analysed. Wall no-slip conditions were set for evaluation purposes. The analysis is a steady-state analysis using the incompressible Navier-Stokes model. Keywords: Steady state analysis, Velocity profile, Fluid flow.
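
    Fully developed laminar flow between parallel plates has the closed-form plane-Poiseuille profile, which is the natural benchmark for a Comsol Navier-Stokes solution of this problem. A minimal sketch with illustrative parameter values (not taken from the article):

```python
# Plane Poiseuille flow between no-slip plates at y = 0 and y = h:
#   u(y) = G/(2*mu) * y * (h - y),  with G = -dp/dx.
# Parameter values are illustrative.
mu = 1.0e-3     # dynamic viscosity, Pa*s (water-like)
h = 0.01        # plate gap, m
G = 0.5         # favorable pressure gradient -dp/dx, Pa/m

def u(y):
    """Streamwise velocity at height y (0 <= y <= h); zero at both walls."""
    return G / (2.0 * mu) * y * (h - y)

u_max = u(h / 2.0)             # maximum at the channel centreline: G*h^2/(8*mu)
u_mean = (2.0 / 3.0) * u_max   # mean of a parabolic profile
print(u_max, u_mean)
print(u(0.0), u(h))  # both 0.0: the no-slip condition at the walls
```

    Checking that the simulated centreline velocity and wall values match these expressions is the standard validation for this geometry.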

  10. Analysis and control of a parallel lower limb based on pneumatic artificial muscles

    Directory of Open Access Journals (Sweden)

    Feilong Jiang

    2016-12-01

    Full Text Available Most robots that are actuated by antagonistic pneumatic artificial muscles are controlled by various control algorithms that cannot adequately imitate the actual muscle distribution of human limbs. Other robots in which the distribution of pneumatic artificial muscle is similar to that of human limbs can only analyze the position of the robot using perceptual data instead of rational knowledge. In order to better imitate the movement of a human limb, the article proposes a humanoid lower limb in the form of a parallel mechanism where muscle is unevenly distributed. Next, the kinematic and dynamic movements of bionic hip joint are analyzed, where the joint movement is controlled by an observer-based fuzzy adaptive control algorithm as a whole rather than each individual pneumatic artificial muscle and parameters that are optimized by a neural network. Finally, experimental results are provided to confirm the effectiveness of the proposed method. We also document the role of muscle in trajectory tracking for the piriformis and musculi obturator internus in isobaric processes.

  11. Asymptotic analysis of the average, steady, isothermal flow in coupled, parallel channels

    International Nuclear Information System (INIS)

    Lund, K.O.

    1976-01-01

    The conservation equations of mass and momentum are derived for the average flow of gases in coupled, parallel channels, or rod bundles. In the case of gas-cooled rod bundles the pitch of the rods is relatively large, so the flows in the channels are strongly coupled. From this observation a perturbation parameter is derived and the descriptive equations are scaled using this parameter, which represents the ratio of the axial flow area to the transverse flow area, and which is of the order of 10^-3 in current gas-cooled fast breeder reactor designs. By expanding the velocities into perturbation series, the equations for two channels are solved as an initial value problem, and the results are compared to a finite difference solution of the same problem. The N-channel problem is solved to the lowest order as a two-point boundary value problem with the pressures specified at the inlet and the outlet. It is concluded from the study that asymptotic methods are effective in solving the flow problems of rod bundles; however, further work is required to evaluate the possible computational advantages of the methods

  12. Feasibility Study of Parallel Finite Element Analysis on Cluster-of-Clusters

    Science.gov (United States)

    Muraoka, Masae; Okuda, Hiroshi

    With the rapid growth of WAN infrastructure and the development of Grid middleware, it has become a realistic and attractive methodology to connect cluster machines over a wide-area network for the execution of computation-demanding applications. Many existing parallel finite element (FE) applications have, however, been designed and developed with a single computing resource in mind, since such applications require frequent synchronization and communication among processes. There have been few FE applications that can exploit the distributed environment so far. In this study, we explore the feasibility of FE applications on the cluster-of-clusters. First, we classify FE applications into two types, tightly coupled applications (TCA) and loosely coupled applications (LCA), based on their communication pattern. A prototype of each application is implemented on the cluster-of-clusters. We perform numerical experiments executing TCA and LCA on both the cluster-of-clusters and a single cluster. Through these experiments, by comparing the performances and communication costs in each case, we evaluate the feasibility of FEA on the cluster-of-clusters.

  13. Graph Grammar-Based Multi-Frontal Parallel Direct Solver for Two-Dimensional Isogeometric Analysis

    KAUST Repository

    Kuźnik, Krzysztof

    2012-06-02

    This paper introduces the graph-grammar-based model for developing a multi-thread multi-frontal parallel direct solver for the two-dimensional isogeometric finite element method. Execution of the solver algorithm has been expressed as a sequence of graph grammar productions. At the beginning, productions construct the elimination tree with leaves corresponding to finite elements. A following sequence of graph grammar productions generates element frontal matrices at leaf nodes, merges matrices at parent nodes and eliminates rows corresponding to fully assembled degrees of freedom. Finally, there are graph grammar productions responsible for the root problem solution and recursive backward substitutions. Expressing the solver algorithm by graph grammar productions allows us to explore the concurrency of the algorithm. The graph grammar productions are grouped into sets of independent tasks that can be executed concurrently. The resulting concurrent multi-frontal solver algorithm is implemented and tested on an NVIDIA GPU, providing O(N log N) execution time complexity, where N is the number of degrees of freedom. We have confirmed this complexity by solving up to 1 million degrees of freedom on a 448-core GPU.
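
    The concurrency exploited above rests on the fact that nodes on the same level of the elimination tree are independent tasks, so a balanced tree over N leaves needs only about log2(N) sequential stages. A toy sketch, in which summing two numbers stands in for merging and eliminating frontal matrices:

```python
# Pairwise tree reduction: each while-iteration is one parallel stage in which
# all merges are mutually independent (they touch disjoint pairs of nodes).

def tree_reduce(leaves):
    level = list(leaves)
    stages = 0
    while len(level) > 1:
        merged = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:            # an odd leftover node is promoted unchanged
            merged.append(level[-1])
        level = merged
        stages += 1
    return level[0], stages

total, stages = tree_reduce(list(range(8)))
print(total, stages)  # 28 3 : eight leaves are merged in three parallel stages
```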

  14. Parallel inter channel interaction mechanisms

    International Nuclear Information System (INIS)

    Jovic, V.; Afgan, N.; Jovic, L.

    1995-01-01

    Interactions between parallel channels are examined. Results of phenomenon analysis and the mechanisms of parallel-channel interaction, obtained from experimental research on non-stationary flow regimes in three parallel vertical channels, are presented for adiabatic single-phase fluid and two-phase mixture flow. (author)

  15. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    Science.gov (United States)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative
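
    The meta-modeling idea described above can be sketched in a few lines: sample an expensive solver at a handful of DOE points, fit a cheap functional form, and evaluate the surrogate thereafter. The `expensive_cfd` function is a hypothetical analytic stand-in for a real CFD run, and the quadratic form is an assumption of this sketch, not the study's model.

```python
def expensive_cfd(x):
    # Stand-in for a costly simulation: a drag-like quadratic response (assumed).
    return 0.5 * x * x + 2.0 * x + 1.0

# Design of experiments: sample the expensive model at a few points only.
xs = [0.0, 1.0, 2.0]
ys = [expensive_cfd(x) for x in xs]

def surrogate(x):
    # Quadratic Lagrange interpolant through the three DOE samples.
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The surrogate reproduces this (quadratic) response at unsampled points,
# so further "simulations" cost only a function evaluation.
print(abs(surrogate(1.5) - expensive_cfd(1.5)) < 1e-9)  # True
```

A closed-form approximation used in parallel, as the abstract suggests, would play the role of an independent sanity check on whether the fitted form is adequate at all.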

  16. Thermogravimetric analysis and kinetic study of bamboo waste treated by Echinodontium taxodii using a modified three-parallel-reactions model.

    Science.gov (United States)

    Yu, Hongbo; Liu, Fang; Ke, Ming; Zhang, Xiaoyu

    2015-06-01

    In this study, the effect of pretreatment with Echinodontium taxodii on the thermal decomposition characteristics and kinetics of bamboo wastes was investigated by thermogravimetric analysis. The results showed that fungal pretreatment can enhance the thermal degradation of bamboo. The negative effect of extractives in bamboo on the thermal decomposition can be decreased by the pretreatment. A modified three-parallel-reactions model based on isolated lignin was first proposed to study the pyrolysis kinetics of bamboo lignocellulose. Kinetic analysis showed that, with increasing pretreatment time, fungal delignification was enhanced, transforming the lignin component with high activation energy into that with low activation energy and raising the cellulose content in bamboo, making the thermal decomposition easier. These results demonstrated that fungal pretreatment provides a potential way to improve the thermal conversion efficiency of bamboo. Copyright © 2015 Elsevier Ltd. All rights reserved.
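
    A minimal numerical sketch of a three-parallel-reactions pyrolysis model of the kind modified above: three pseudo-components decompose independently with first-order Arrhenius kinetics under a constant heating rate. All kinetic parameters below are illustrative assumptions, not the fitted values of the study.

```python
import math

R = 8.314            # gas constant, J/(mol K)
beta = 10.0 / 60.0   # heating rate, K/s (10 K/min)

components = [       # (mass fraction, A [1/s], E [J/mol]) -- assumed values
    (0.30, 1e10, 110e3),  # hemicellulose-like
    (0.45, 1e14, 190e3),  # cellulose-like
    (0.25, 1e3,  60e3),   # lignin-like (broad, low activation energy)
]

def residual_mass(T_end, T0=400.0, dT=0.5):
    """Explicit Euler integration of dalpha/dT = (A/beta) exp(-E/RT) (1-alpha)."""
    alphas = [0.0] * len(components)
    T = T0
    while T < T_end:
        for i, (_, A, E) in enumerate(components):
            k = (A / beta) * math.exp(-E / (R * T))
            alphas[i] = min(1.0, alphas[i] + k * (1.0 - alphas[i]) * dT)
        T += dT
    return 1.0 - sum(f * a for (f, A, E), a in zip(components, alphas))

print(round(residual_mass(900.0), 3))  # nearly all volatile mass released by 900 K
```

Fitting such a model to a TG curve amounts to adjusting the fractions and Arrhenius parameters of the three reactions, which is where an isolated-lignin reference, as used in the paper, constrains the lignin component.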

  17. The Modeling and Harmonic Coupling Analysis of Multiple-Parallel Connected Inverter Using Harmonic State Space (HSS)

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth

    2015-01-01

    As the number of power-electronics-based systems increases, studies of overall stability and harmonic problems are on the rise. In order to analyze harmonics and stability, most research uses analysis methods based on the Linear Time Invariant (LTI) approach. However, this can … be difficult for complex multi-parallel connected systems, especially in the case of renewable energy, where intermittent operation due to weather conditions is possible. Hence, many different operating points can arise for the power converter, and the impedance characteristics can … demonstrate other phenomena, which cannot be found in the conventional LTI approach. The theoretical modeling and analysis are verified by means of simulations and experiments…

  18. Analysis of Economic Factors Affecting Stock Market

    OpenAIRE

    Xie, Linyin

    2010-01-01

    This dissertation concentrates on the analysis of economic factors affecting the Chinese stock market by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield and real estate total value. The stock market comprises fixed-interest stocks and equity shares. In this dissertation, the stock market is restricted to the equity market. The stock price in thi...

  19. Parallel Programming Application to Matrix Algebra in the Spectral Method for Control Systems Analysis, Synthesis and Identification

    Directory of Open Access Journals (Sweden)

    V. Yu. Kleshnin

    2016-01-01

    Full Text Available The article describes the matrix algebra libraries based on modern parallel programming technologies for the Spectrum software, which can use a spectral method (the spectral form of mathematical description) to analyse, synthesise and identify deterministic and stochastic dynamical systems. The developed matrix algebra libraries use the following technologies: for CPUs, OmniThreadLibrary, OpenMP, Intel Threading Building Blocks and Intel Cilk Plus; for GPUs, NVIDIA CUDA, OpenCL and Microsoft Accelerated Massive Parallelism. The developed libraries support matrices with real elements (single and double precision). The matrix dimensions are limited only by the 32-bit or 64-bit memory model and the computer configuration. These libraries are general-purpose and can be used not only for the Spectrum software; they can also find application in other projects where there is a need to perform operations with large matrices. The article provides a comparative analysis of the libraries developed for various matrix operations (addition, subtraction, scalar multiplication, multiplication, powers of matrices, tensor multiplication, transpose, inverse matrix, finding a solution of a system of linear equations) through numerical experiments using different CPUs and GPUs. The article contains sample programs and performance test results for matrix multiplication, which requires the most computational resources of all the operations.
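
    The operation the performance tests centre on, dense matrix multiplication, reduces to the triple loop below. A pure-Python reference is shown only to make the benchmarked operation concrete; the CPU and GPU libraries discussed above tile and parallelize this same loop.

```python
def matmul(a, b):
    """Dense matrix product of a (n x m) and b (m x p), as nested lists."""
    n, m, p = len(a), len(b), len(b[0])
    assert all(len(row) == m for row in a), "inner dimensions must agree"
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):        # k-loop outside j improves cache locality
            aik = a[i][k]
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19.0, 22.0], [43.0, 50.0]]
```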

  20. Open Source Parallel Image Analysis and Machine Learning Pipeline, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Continuum Analytics proposes a Python-based open-source data analysis machine learning pipeline toolkit for satellite data processing, weather and climate data...

  1. Open Source Parallel Image Analysis and Machine Learning Pipeline, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Today, NASA researchers must create, debug, and tune custom workflows for each analysis. Creation and modification of custom workflows is fragile, non-portable and...

  2. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab; Meseguer, José

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability

  3. Improving Systematic Constraint-driven Analysis Using Incremental and Parallel Techniques

    Science.gov (United States)

    2012-05-01

    predicates etc.) and have not been shown to scale to checking applications, which Korat readily handles. The Alloy Analyzer uses the Kodkod tool [110… specifications within given bounds. It uses Kodkod for its analysis. JForge translates an imperative Java program to its declarative equivalent. We… Jackson. “Kodkod: A Relational Model Finder”. In: Proc. International Conference on Tools and Algorithms for the Construction and Analysis of Systems

  4. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    Full Text Available The article studies the importance of economic analysis, drawing on the results of research in the scientific works of domestic and foreign scientists. The influence of factors on the change in the cost of harvested timber products has been calculated by cost item. The results of calculating the influence of factors on the change of costs per 1 UAH are determined using the full cost of sold products. Variable and fixed costs and their distribution are allocated, which influences the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper singles out the overall results of calculating the influence of factors on cost changes per 1 UAH of sold products. Based on the results of the analysis, a list of reserves for reducing the cost of production at forest enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forest enterprises are investigated based on the conducted factor analysis.

  5. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling.

    Science.gov (United States)

    Núñez, M; Robie, T; Vlachos, D G

    2017-10-28

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
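
    The rescaling idea described above can be illustrated with a toy stiff network: a fast quasi-equilibrated pair A<->B ahead of a slow step B->C. Scaling the fast pair's rate constants down together leaves the slow observable nearly unchanged while removing most of the simulated events. This is a minimal Gillespie-style sketch with illustrative rates, not the authors' algorithm or their statistical sampling criteria.

```python
import random

def time_to_consume(n_a, k_fast, k_slow, seed=1):
    """KMC run of A<->B (fast) and B->C (slow); returns
    (simulated time until every molecule is C, number of KMC steps)."""
    random.seed(seed)
    a, b, t, steps = n_a, 0, 0.0, 0
    while a + b > 0:
        rates = [k_fast * a, k_fast * b, k_slow * b]  # A->B, B->A, B->C
        total = sum(rates)
        t += random.expovariate(total)                # time to next event
        r = random.uniform(0.0, total)
        if b == 0 or r < rates[0]:
            a, b = a - 1, b + 1                       # A -> B
        elif r < rates[0] + rates[1]:
            a, b = a + 1, b - 1                       # B -> A
        else:
            b -= 1                                    # B -> C (slow)
        steps += 1
    return t, steps

t_stiff, s_stiff = time_to_consume(20, k_fast=1e3, k_slow=10.0)
t_scaled, s_scaled = time_to_consume(20, k_fast=50.0, k_slow=10.0)
print(s_scaled < s_stiff)  # True: rescaling removes most of the fast events
```

With both settings the slow conversion time is of the same order, but the step count drops roughly by the rescaling factor, which is the speedup the paper's statistical criteria are designed to exploit safely.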

  6. An SPSS R-Menu for Ordinal Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mario Basto

    2012-01-01

    Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.
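
    Since the theme of this collection is parallel analysis for factor retention, a minimal two-variable sketch of the criterion is easy to give: for two variables the correlation matrix has eigenvalues 1 + r and 1 - r, and Horn's rule retains a factor only if the observed leading eigenvalue beats the 95th percentile of leading eigenvalues from random data of the same size. This illustrates the retention rule only; it is not the SPSS/R dialog described above.

```python
import random

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def retain_first_factor(x, y, n_sim=200, seed=7):
    """Horn's parallel analysis for the 2-variable case (eigenvalues 1 +/- r)."""
    observed = 1.0 + abs(pearson_r(x, y))        # leading observed eigenvalue
    random.seed(seed)
    sims = sorted(1.0 + abs(pearson_r([random.gauss(0, 1) for _ in x],
                                      [random.gauss(0, 1) for _ in y]))
                  for _ in range(n_sim))
    return observed > sims[int(0.95 * n_sim)]    # beat the 95th percentile?

random.seed(0)
xs = list(range(100))
ys = [2.0 * v + random.gauss(0, 1) for v in xs]  # strongly correlated pair
print(retain_first_factor(xs, ys))  # True: one factor is retained
```

For ordinal, Likert-type data the same rule is applied to polychoric rather than Pearson correlations, which is part of what the R packages behind the dialog provide.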

  7. Voltage-spike analysis for a free-running parallel inverter

    Science.gov (United States)

    Lee, F. C. Y.; Wilson, T. G.

    1974-01-01

    Unwanted and sometimes damaging high-amplitude voltage spikes occur during each half cycle in many transistor saturable-core inverters at the moment when the core saturates and the transistors switch. The analysis shows that spikes are an intrinsic characteristic of certain types of inverters, even with negligible leakage inductance and a purely resistive load. The small but unavoidable after-saturation inductance of the saturable-core transformer plays an essential role in creating these undesired high-voltage spikes. State-plane analysis provides insight into the complex interaction between core and transistors, and shows the circuit parameters upon which the magnitude of these spikes depends.

  8. Magnetic Field Emission Comparison at Different Quality Factors with Series-Parallel Compensation Network for Wireless Power Transfer to Vehicles

    DEFF Research Database (Denmark)

    Batra, Tushar; Schaltz, Erik

    2014-01-01

    … to the surroundings also increase with an increase in the quality factor. In this paper, analytical expressions are first developed for comparing magnetic emissions at different quality factors. Theoretical and simulation (Comsol) results show a comparatively lower increase for the magnetic field emissions to the linear...

  9. Parallel computation

    International Nuclear Information System (INIS)

    Jejcic, A.; Maillard, J.; Maurel, G.; Silva, J.; Wolff-Bacha, F.

    1997-01-01

    Work in the field of parallel processing has developed through research activities using several numerical Monte Carlo simulations related to basic or applied current problems of nuclear and particle physics. For the applications utilizing the GEANT code, development and improvement work was done on parts simulating low-energy physical phenomena like radiation, transport and interaction. The problem of actinide burning by means of accelerators was approached using a simulation with the GEANT code. A program for neutron tracking in the range of low energies up to the thermal region has been developed. It is coupled to the GEANT code and permits, in a single pass, the simulation of a hybrid reactor core receiving a proton burst. Other works in this field refer to simulations for nuclear medicine applications such as the development of biological probes, the evaluation and characterization of gamma cameras (collimators, crystal thickness) and methods for dosimetric calculations. In particular, these calculations are suited to a geometrical parallelization approach especially adapted to parallel machines of the TN310 type. Other works mentioned in the same field refer to the simulation of electron channelling in crystals and the simulation of the beam-beam interaction effect in colliders. The GEANT code was also used to simulate the operation of germanium detectors designed for natural and artificial radioactivity monitoring of the environment.

  10. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and the interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of the average work productivity across the factors affecting it is conducted by means of the u-substitution method.
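
    A substitution-style decomposition of the kind applied above can be sketched for the productivity case: with W = gross value added / active population, the change in W is split into factor contributions by substituting the factors one at a time (the decomposition is exact but depends on the substitution order). The numbers below are illustrative, not the Romanian data.

```python
def chain_substitution(v0, l0, v1, l1):
    """Split the change in W = V/L into the parts due to V and due to L."""
    w0, w1 = v0 / l0, v1 / l1
    effect_v = v1 / l0 - w0      # substitute gross value added first
    effect_l = w1 - v1 / l0      # then substitute the active population
    return effect_v, effect_l

dv, dl = chain_substitution(v0=100.0, l0=20.0, v1=132.0, l1=22.0)
print(round(dv, 2), round(dl, 2))  # 1.6 -0.6, summing to the total change of 1.0
```

The two effects always telescope to the total change in productivity, which is what makes the method attractive for attributing a cost or productivity change to individual factors.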

  11. FDTD parallel computational analysis of grid-type scattering filter characteristics for medical X-ray image diagnosis

    International Nuclear Information System (INIS)

    Takahashi, Koichi; Miyazaki, Yasumitsu; Goto, Nobuo

    2007-01-01

    X-ray diagnosis depends on the intensity of transmitted and scattered waves in X-ray propagation through biomedical media. X-rays are scattered and absorbed by tissues such as fat, bone and internal organs. However, image processing for medical diagnosis based on the scattering and absorption characteristics of these tissues in the X-ray spectrum has not been studied extensively. To obtain precise information about the tissues in a living body, accurate characteristics of scattering and absorption are required. In this paper, X-ray scattering and absorption in biomedical media are studied using the 2-dimensional finite difference time domain (FDTD) method. In the FDTD method, the size of the analysis space is severely limited by the performance of available computers. To overcome this limitation, a parallel and successive FDTD method is introduced. As a result of computer simulation, the amplitudes of transmitted and scattered waves are presented numerically. The fundamental filtering characteristics of the grid-type filter are also shown numerically. (author)
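
    The update scheme at the core of FDTD is compact enough to show. Below is a 1-D leapfrog sketch in normalized units (Courant number 1, additive Gaussian source, fixed-zero boundaries), a reduced stand-in for the 2-D grid-filter simulations above; all grid sizes are illustrative.

```python
import math

def fdtd_1d(n_cells=200, n_steps=150, src=5):
    """1-D FDTD: interleaved E/H updates on a staggered grid."""
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for t in range(n_steps):
        for k in range(n_cells - 1):                     # H-field half-step
            hy[k] += ez[k + 1] - ez[k]
        ez[src] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft Gaussian source
        for k in range(1, n_cells):                      # E-field half-step
            ez[k] += hy[k] - hy[k - 1]
    return ez

ez = fdtd_1d()
print(max(abs(v) for v in ez) > 0.1)  # True: the pulse has propagated into the grid
```

The memory limitation mentioned in the abstract is visible even here: the state is the full field arrays, so a 2-D bio-medium at X-ray-relevant resolution forces the domain decomposition (parallel and successive subdomain runs) that the paper introduces.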

  12. Workspace quality analysis and application for a completely restrained 3-Dof planar cable-driven parallel manipulator

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Xiaoqiang; Tang, Lewei; Wang, Jinsong [Tsinghua University, Beijing (China); Sun, Dengfeng [Purdue University, West Lafayette (United States)

    2013-08-15

    With the advantages of a large workspace, low energy consumption and small inertia, the cable-driven parallel manipulator (CDPM) is suitable for moving heavy workpieces at high velocities and accelerations. We present a workspace analysis approach to solve the force and torque equilibria of completely restrained CDPMs. With this approach, both the distribution and the magnitudes of the cable tensions are investigated together. Two new indices, the all cable tension distribution index (ACTDI) and the area of the global quality workspace (AG), are proposed to evaluate the quality of the workspace. By concentrating on the workspace and its quality combined with the tension characteristics, these criteria are used to determine the optimal workspace of CDPMs. To verify the capacity of the proposed method, simulation examples are presented and the results demonstrate the approach's effectiveness. In the end, the dimensional design of a planar CDPM is discussed with the indices of workspace quality.

  13. Workspace quality analysis and application for a completely restrained 3-Dof planar cable-driven parallel manipulator

    International Nuclear Information System (INIS)

    Tang, Xiaoqiang; Tang, Lewei; Wang, Jinsong; Sun, Dengfeng

    2013-01-01

    With the advantages of a large workspace, low energy consumption and small inertia, the cable-driven parallel manipulator (CDPM) is suitable for moving heavy workpieces at high velocities and accelerations. We present a workspace analysis approach to solve the force and torque equilibria of completely restrained CDPMs. With this approach, both the distribution and the magnitudes of the cable tensions are investigated together. Two new indices, the all cable tension distribution index (ACTDI) and the area of the global quality workspace (AG), are proposed to evaluate the quality of the workspace. By concentrating on the workspace and its quality combined with the tension characteristics, these criteria are used to determine the optimal workspace of CDPMs. To verify the capacity of the proposed method, simulation examples are presented and the results demonstrate the approach's effectiveness. In the end, the dimensional design of a planar CDPM is discussed with the indices of workspace quality.
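
    The tension-distribution computation behind indices like ACTDI can be sketched for the simplest case: a planar point mass driven by three cables (two equilibrium equations, three unknown tensions). Cables can only pull, so the minimum-norm solution of A t = w is shifted along the null space of A until every tension clears a lower bound. The geometry (cables at 90°, 210° and 330°) and the wrench are illustrative assumptions, and the sketch assumes the null-space direction can be oriented all-positive, which this geometry satisfies.

```python
def solve_2x2(m, rhs):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((rhs[0] * m[1][1] - rhs[1] * m[0][1]) / det,
            (m[0][0] * rhs[1] - m[1][0] * rhs[0]) / det)

def cable_tensions(A, w, t_min=1.0):
    """Positive tensions t with A t = w: min-norm solution + null-space shift."""
    # minimum-norm particular solution t_p = A^T (A A^T)^{-1} w
    aat = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(2)]
           for i in range(2)]
    y = solve_2x2(aat, w)
    t = [A[0][k] * y[0] + A[1][k] * y[1] for k in range(3)]
    # null-space direction of A: cross product of its two rows
    n = [A[0][1] * A[1][2] - A[0][2] * A[1][1],
         A[0][2] * A[1][0] - A[0][0] * A[1][2],
         A[0][0] * A[1][1] - A[0][1] * A[1][0]]
    if sum(n) < 0:
        n = [-v for v in n]  # orient all-positive (assumed possible here)
    lam = max(max((t_min - tk) / nk for tk, nk in zip(t, n)), 0.0)
    return [tk + lam * nk for tk, nk in zip(t, n)]

# unit direction vectors of the three cables at 90, 210 and 330 degrees
A = [[0.0, -0.8660254, 0.8660254],   # x-components
     [1.0, -0.5,       -0.5]]        # y-components
t = cable_tensions(A, [0.5, 0.3])
print([round(v, 3) for v in t])  # every tension >= 1.0, and A t still equals w
```

Workspace-quality indices then score a pose by how evenly distributed (and how far from the bounds) such feasible tension sets are across the workspace.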

  14. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationships between the parameters and specific features, events, and processes (FEPs). This report describes the biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in the performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  15. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section (SENSE, SMASH, g-SMASH and GRAPPA), selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications, including angiography, cardiac imaging and applications using echo planar imaging, are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)
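
    The per-pixel heart of SENSE, one of the algorithms reviewed above, is small enough to write down: at acceleration factor 2, each aliased pixel is the coil-sensitivity-weighted sum of two true pixels, so unfolding is a tiny linear solve per pixel (exactly determined with two coils). The sensitivities and pixel values below are illustrative numbers, not a real coil calibration.

```python
def sense_unfold(s, a):
    """Unfold one aliased pixel pair.

    s: 2x2 coil sensitivities s[coil][pixel]; a: aliased signal per coil.
    Solves the 2x2 system a = s @ x by Cramer's rule.
    """
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    x0 = (a[0] * s[1][1] - a[1] * s[0][1]) / det
    x1 = (s[0][0] * a[1] - s[1][0] * a[0]) / det
    return x0, x1

# True pixel values 3.0 and 5.0, folded on top of each other through two coils:
S = [[1.0, 0.4], [0.3, 0.9]]
aliased = [1.0 * 3.0 + 0.4 * 5.0, 0.3 * 3.0 + 0.9 * 5.0]
print(sense_unfold(S, aliased))  # (3.0, 5.0) up to rounding
```

The g-factor mentioned in the review quantifies how ill-conditioned this little system becomes when the coil sensitivities are too similar, which is what amplifies noise at high acceleration.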

  16. Factor analysis for exercise stress radionuclide ventriculography

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Yasuda, Mitsutaka; Oku, Hisao; Ikuno, Yoshiyasu; Takeuchi, Kazuhide; Takeda, Tadanao; Ochi, Hironobu

    1987-01-01

    Using factor analysis, a new image-processing method for exercise stress radionuclide ventriculography, changes in factors associated with exercise were evaluated in 14 patients with angina pectoris or old myocardial infarction. The patients were imaged in the left anterior oblique projection, and three factor images were presented on a color-coded scale. Abnormal factors (AF) were observed in 6 patients before exercise, in 13 during exercise, and in 4 after exercise. In 7 patients, the occurrence of AF was associated with exercise. Five of them became free from AF after exercise. Three patients showing AF before exercise had aggravation of AF during exercise. Overall, the occurrence or aggravation of AF was associated with exercise in ten (71%) of the patients. The other three patients, however, had disappearance of AF during exercise. In the last patient, no AF was observed throughout the study. In view of the high incidence of AF associated with exercise, factor analysis may have potential for evaluating cardiac reserve from the viewpoint of left ventricular wall motion abnormality. (Namekawa, K.)

  17. Suicidality: risk factors and the effects of antidepressants. The example of parallel reduction of suicidality and other depressive symptoms during treatment with the SNRI, milnacipran

    Directory of Open Access Journals (Sweden)

    Philippe Courtet

    2010-08-01

    Full Text Available Philippe Courtet, CHRU Montpellier, Inserm U888, University of Montpellier I, Montpellier, France. Abstract: Suicidal behavior (SB) represents a major public health issue. Clinical and basic research suggests that SB is a specific entity in psychiatric nosology involving a combination of personality traits, genetic factors, childhood abuse and neuroanatomical abnormalities. The principal risk factor for suicide is depression. More than 60% of patients who complete suicide are depressed at the time of suicide, most of them untreated. There has been a controversy concerning a possible increased risk of SB in some depressed patients treated with antidepressants. Most recent evidence suggests, however, that treatment of depressed patients is associated with a favorable benefit-risk ratio. A recent study has determined the effects of 6 weeks of antidepressant treatment with the serotonin and norepinephrine reuptake inhibitor, milnacipran, on suicidality in a cohort of 30 patients with mild to moderate depression. At baseline, mild suicidal thoughts were present in 46.7% of patients. Suicidal thoughts decreased progressively throughout the study in parallel with other depressive symptoms and were essentially absent at the end of the study. At no time during treatment was there any indication of an increased suicidal risk. Retardation and psychic anxiety decreased in parallel, possibly explaining the lack of any “activation syndrome” in this study. Keywords: suicide, milnacipran, SNRI, activation syndrome

  18. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1980-01-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to quantitative analysis of the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of the energy loss of the incident particle with penetration depth, and of X-ray self-absorption, when a particular geometrical distribution of elements in the hair is assumed for calculational purposes. The correction factor has been applied to the analysis of the Zn, Cu and Ca contents of hair as a function of the energy of the incident particle. (orig.)
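
    The shape of such a correction factor can be sketched numerically: the X-ray yield from depth x is weighted by the beam's falling energy (through an ionization cross-section) and by self-absorption of the outgoing X-ray along its exit path. Every functional form and constant below (constant stopping power, a cross-section scaling as E², the attenuation coefficient) is an assumption of the sketch, not a value from the paper.

```python
import math

def correction_factor(e0, depth, n=1000):
    """Ratio of the depth-integrated yield to the naive surface-yield estimate."""
    de_dx = 20.0                 # assumed constant stopping power, keV/cm
    mu = 50.0                    # assumed X-ray attenuation coefficient, 1/cm
    sigma = lambda e: e * e      # assumed cross-section scaling with beam energy
    dx = depth / n
    total = 0.0
    for i in range(n):           # midpoint-rule integration over depth
        x = (i + 0.5) * dx
        e = e0 - de_dx * x       # beam energy remaining at depth x
        if e <= 0.0:
            break                # beam stopped inside the specimen
        total += sigma(e) * math.exp(-mu * x) * dx
    return total / (sigma(e0) * depth)

print(round(correction_factor(e0=2000.0, depth=0.01), 3))  # below 1: yield suppressed
```

Varying e0 in such a model is what lets the correction be reported as a function of incident particle energy, as in the abstract.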

  19. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Vol. 18, No. 3 (2007), pp. 698-707, ISSN 1045-9227. R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079. Institutional research plan: CEZ:AV0Z10300504. Keywords: recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 2.769, year: 2007

  20. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1979-06-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to quantitative analysis of the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of energy loss of the incident particle with penetration depth, and of X-ray self-absorption, when a particular geometrical distribution of elements in the hair is assumed for calculational purposes. The correction factor has been applied to the analysis of the Zn, Cu and Ca contents of hair as a function of the energy of the incident particle. (author)

  1. A parallel and sensitive software tool for methylation analysis on multicore platforms.

    Science.gov (United States)

    Tárraga, Joaquín; Pérez, Mariano; Orduña, Juan M; Duato, José; Medina, Ignacio; Dopazo, Joaquín

    2015-10-01

    DNA methylation analysis suffers from very long processing times, as the advent of Next-Generation Sequencers has shifted the bottleneck of genomic studies from the sequencers that obtain the DNA samples to the software that performs the analysis of these samples. The existing software for methylation analysis does not seem to scale efficiently with either the size of the dataset or the length of the reads to be analyzed. As sequencers are expected to provide longer and longer reads in the near future, efficient and scalable methylation software should be developed. We present a new software tool, called HPG-Methyl, which efficiently maps bisulphite sequencing reads onto DNA and analyzes DNA methylation. The strategy used by this software consists of leveraging the speed of the Burrows-Wheeler Transform to map a large number of DNA fragments (reads) rapidly, as well as the accuracy of the Smith-Waterman algorithm, which is employed exclusively to deal with the most ambiguous and shortest reads. Experimental results on platforms with Intel multicore processors show that HPG-Methyl significantly outperforms state-of-the-art software such as Bismark, BS-Seeker or BSMAP in both execution time and sensitivity, particularly for long bisulphite reads. Availability: software in the form of C libraries and functions, together with instructions to compile and execute it, is available by sftp to anonymous@clariano.uv.es (password 'anonymous'). Contact: juan.orduna@uv.es or jdopazo@cipf.es. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
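
    Why the Burrows-Wheeler Transform makes read mapping fast can be shown at toy scale: the BWT of the reference supports backward search, which counts the occurrences of a read in time proportional to the read length. The naive index below (full rotation sort, linear-scan occurrence counts) is only a sketch; real aligners such as HPG-Methyl add compressed rank structures and bisulphite-aware matching on top of the same idea.

```python
def bwt(s):
    """Burrows-Wheeler Transform via full rotation sort (fine for toy inputs)."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(row[-1] for row in rotations)

def count_occurrences(bw, pattern):
    """Backward search: count matches of pattern using only the BWT string."""
    c_table, total = {}, 0
    for ch in sorted(set(bw)):
        c_table[ch] = total          # number of characters smaller than ch
        total += bw.count(ch)
    lo, hi = 0, len(bw)
    for ch in reversed(pattern):     # extend the match one character at a time
        if ch not in c_table:
            return 0
        lo = c_table[ch] + bw[:lo].count(ch)
        hi = c_table[ch] + bw[:hi].count(ch)
        if lo >= hi:
            return 0
    return hi - lo

reference = "ACGTACGTAC"
index = bwt(reference)
print(count_occurrences(index, "ACG"))  # 2
```

Each loop iteration narrows a range of sorted suffixes, so the work per read is independent of the reference length once the index is built, which is what shifts the cost from mapping to index construction.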

  2. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2003-07-25

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed descriptions of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  3. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis

  4. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the

  5. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of snow, multicore, parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  6. Steady state flow analysis of two-phase natural circulation in multiple parallel channel loop

    International Nuclear Information System (INIS)

    Bhusare, V.H.; Bagul, R.K.; Joshi, J.B.; Nayak, A.K.; Kannan, Umasankari; Pilkhwal, D.S.; Vijayan, P.K.

    2016-01-01

    Highlights: • Liquid circulation velocity increases with increasing superficial gas velocity. • Total two-phase pressure drop decreases with increasing superficial gas velocity. • Channels with larger driving force have maximum circulation velocities. • Good agreement between experimental and model predictions. - Abstract: In this work, a steady state flow analysis has been carried out experimentally in order to estimate the liquid circulation velocities and two-phase pressure drop in an air–water multichannel circulating loop. Experiments were performed in a 15-channel circulating loop. Single phase and two-phase pressure drops in the channels have been measured experimentally and have been compared with the theoretical model of Joshi et al. (1990). Experimental measurements show good agreement with the model.

  7. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
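
    The spline-anchored S-wave parameterization can be sketched outside GooFit. The example below interpolates a complex amplitude between hypothetical m²(h⁺h⁻) control points with SciPy's CubicSpline, treating real and imaginary parts separately; the knot positions and amplitude values are invented for illustration and bear no relation to any fitted D⁺ → h⁺h⁺h⁻ amplitude.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical control points: m^2(h+h-) knots and complex S-wave amplitudes
m2 = np.array([0.4, 0.8, 1.2, 1.6, 2.0])
amp = np.array([1.0 + 0.2j, 0.8 + 0.5j, 0.3 + 0.9j, -0.2 + 0.6j, -0.4 + 0.1j])

# One cubic spline per component; magnitude/phase splines are another option
re_spline = CubicSpline(m2, amp.real)
im_spline = CubicSpline(m2, amp.imag)

def s_wave(m2_val):
    """Interpolated complex S-wave amplitude between the control points."""
    return complex(re_spline(m2_val)) + 1j * complex(im_spline(m2_val))
```

In a real fit the control-point values would be free parameters of the likelihood; the spline only supplies the amplitude between them.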

  8. Genetic particle swarm parallel algorithm analysis of optimization arrangement on mistuned blades

    Science.gov (United States)

    Zhao, Tianyu; Yuan, Huiqun; Yang, Wenjun; Sun, Huagang

    2017-12-01

    This article introduces a method of mistuned parameter identification which consists of static frequency testing of blades, dichotomy and finite element analysis. A lumped parameter model of an engine bladed-disc system is then set up. A blade arrangement optimization method, the genetic particle swarm optimization algorithm, is presented. It combines a discrete particle swarm optimization with a genetic algorithm, giving it both local and global search ability. CUDA-based co-evolution particle swarm optimization, using a graphics processing unit, is presented and its performance is analysed. The results show that using the optimization results can reduce the amplitude and localization of the forced vibration response of a bladed-disc system, while optimization based on the CUDA framework can improve the computing speed. This method could provide support for engineering applications in terms of effectiveness and efficiency.
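
    As a point of reference for the swarm half of the hybrid method, here is a minimal continuous particle swarm optimizer in Python, shown on a simple sphere objective. It omits the discrete blade-arrangement encoding, the genetic operators and the CUDA co-evolution described in the article; the inertia and acceleration coefficients are conventional textbook values, not the authors' settings.

```python
import random

def pso(f, dim, n_particles=20, iters=150, seed=0):
    """Minimal particle swarm optimization minimizing f over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```

In the article's setting the "position" would be a permutation of blades on the disc, which is why a discrete variant plus genetic crossover/mutation is needed.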

  9. Confirmatory factor analysis using Microsoft Excel.

    Science.gov (United States)

    Miles, Jeremy N V

    2005-11-01

    This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA and, thus, to gain insight into the workings of the procedure.
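
    The spreadsheet mechanics the article describes amount to computing a model-implied covariance matrix and minimizing its discrepancy from the sample matrix. The sketch below does the same in Python for a hypothetical one-factor, four-indicator model, using an unweighted least-squares discrepancy (Excel's Solver plays the role that scipy.optimize.minimize plays here). The loadings, and hence the covariance matrix, are fabricated so the fit can be checked against a known answer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical population model: four indicators of one factor.
# S is built from known loadings so the recovered solution can be verified.
true_load = np.array([0.8, 0.7, 0.6, 0.5])
S = np.outer(true_load, true_load) + np.diag(1.0 - true_load**2)

def implied_cov(params):
    lam, psi = params[:4], params[4:]
    # One-factor CFA: Sigma = lambda lambda' + diag(psi)
    return np.outer(lam, lam) + np.diag(psi)

def discrepancy(params):
    # Unweighted least-squares fit function (ML is the more common choice)
    return np.sum((implied_cov(params) - S) ** 2)

res = minimize(discrepancy, x0=np.full(8, 0.5), method="L-BFGS-B",
               bounds=[(-1.5, 1.5)] * 4 + [(0.001, 2.0)] * 4)
loadings = res.x[:4]
```

The factor model is identified only up to the sign of the loading vector, which is why a check against the true values should compare absolute loadings.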

  10. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA and maximum autocorrelation factor (MAF) analysis
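
    The implicit transformation described above can be made concrete. The following is a small NumPy sketch of kernel PCA with an RBF kernel: build the kernel matrix, double-center it (which centers the data implicitly in feature space), and read projections off its leading eigenvectors. The kernel choice and bandwidth are illustrative assumptions, and the kernel MAF extension in the record is not covered.

```python
import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    """Project the rows of X onto the leading kernel principal axes."""
    # RBF (Gaussian) kernel matrix from pairwise squared distances
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * d2)
    # Double-centering: equivalent to centering in the implicit feature space
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenpairs of the centered kernel matrix give the components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

X = np.random.default_rng(1).normal(size=(20, 3))
scores = rbf_kernel_pca(X, gamma=0.5)   # 20 x 2 matrix of projections
```

Because the analysis happens entirely through the n x n kernel matrix, the feature space itself (possibly infinite-dimensional) never has to be constructed.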

  11. Control rod drop transient analysis with the coupled parallel code pCTF-PARCSv2.7

    International Nuclear Information System (INIS)

    Ramos, Enrique; Roman, Jose E.; Abarca, Agustín; Miró, Rafael; Bermejo, Juan A.

    2016-01-01

    Highlights: • An MPI parallel version of the thermal–hydraulic subchannel code COBRA-TF has been developed. • The parallel code has been coupled to the 3D neutron diffusion code PARCSv2.7. • The new codes are validated with a control rod drop transient. - Abstract: In order to reduce the response time when simulating large reactors in detail, a parallel version of the thermal–hydraulic subchannel code COBRA-TF (CTF) has been developed using the standard Message Passing Interface (MPI). The parallelization is oriented to reactor cells, so it is best suited for models consisting of many cells. The generation of the Jacobian matrix is parallelized, in such a way that each processor is in charge of generating the data associated with a subset of cells. Also, the solution of the linear system of equations is done in parallel, using the PETSc toolkit. With the goal of creating a powerful tool to simulate the reactor core behavior during asymmetrical transients, the 3D neutron diffusion code PARCSv2.7 (PARCS) has been coupled with the parallel version of CTF (pCTF) using the Parallel Virtual Machine (PVM) technology. In order to validate the correctness of the parallel coupled code, a control rod drop transient has been simulated, comparing the results with the experimental measurements acquired during a real NPP test.
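
    The cell-oriented parallelization can be illustrated with the partitioning step alone: each processor is made responsible for the Jacobian rows of a contiguous block of cells. The helper below is a generic block-distribution sketch in Python, not code from pCTF, and the balancing rule (remainder cells go to the lowest ranks) is one common convention among several.

```python
def partition_cells(n_cells, n_procs):
    """Block-distribute cell indices so each process assembles the Jacobian
    rows for its own contiguous subset of cells."""
    base, extra = divmod(n_cells, n_procs)
    blocks, start = [], 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)  # spread the remainder
        blocks.append(range(start, start + size))
        start += size
    return blocks

blocks = partition_cells(10, 3)   # e.g. ranks get 4, 3 and 3 cells
```

In the MPI code each rank would then assemble only its rows and hand the distributed matrix to a parallel solver such as the ones PETSc provides.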

  12. Targeted capture massively parallel sequencing analysis of LCIS and invasive lobular cancer: Repertoire of somatic genetic alterations and clonal relationships.

    Science.gov (United States)

    Sakr, Rita A; Schizas, Michail; Carniello, Jose V Scarpa; Ng, Charlotte K Y; Piscuoglio, Salvatore; Giri, Dilip; Andrade, Victor P; De Brot, Marina; Lim, Raymond S; Towers, Russell; Weigelt, Britta; Reis-Filho, Jorge S; King, Tari A

    2016-02-01

    Lobular carcinoma in situ (LCIS) has been proposed as a non-obligate precursor of invasive lobular carcinoma (ILC). Here we sought to define the repertoire of somatic genetic alterations in pure LCIS and in synchronous LCIS and ILC using targeted massively parallel sequencing. DNA samples extracted from microdissected LCIS, ILC and matched normal breast tissue or peripheral blood from 30 patients were subjected to massively parallel sequencing targeting all exons of 273 genes, including the genes most frequently mutated in breast cancer and DNA repair-related genes. Single nucleotide variants and insertions and deletions were identified using state-of-the-art bioinformatics approaches. The constellation of somatic mutations found in LCIS (n = 34) and ILC (n = 21) were similar, with the most frequently mutated genes being CDH1 (56% and 66%, respectively), PIK3CA (41% and 52%, respectively) and CBFB (12% and 19%, respectively). Among 19 LCIS and ILC synchronous pairs, 14 (74%) had at least one identical mutation in common, including identical PIK3CA and CDH1 mutations. Paired analysis of independent foci of LCIS from 3 breasts revealed at least one common mutation in each of the 3 pairs (CDH1, PIK3CA, CBFB and PKHD1L1). LCIS and ILC have a similar repertoire of somatic mutations, with PIK3CA and CDH1 being the most frequently mutated genes. The presence of identical mutations between LCIS-LCIS and LCIS-ILC pairs demonstrates that LCIS is a clonal neoplastic lesion, and provides additional evidence that at least some LCIS are non-obligate precursors of ILC. Copyright © 2015 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
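
    The clonal-relatedness criterion used here, at least one identical mutation shared between a pair of lesions, reduces to a set intersection over mutation calls. A toy Python sketch with fabricated (gene, protein change) calls, purely to show the bookkeeping:

```python
# Hypothetical mutation calls per lesion, as (gene, protein change) pairs;
# these are invented examples, not data from the study.
lcis = {("CDH1", "p.Q23*"), ("PIK3CA", "p.H1047R"), ("CBFB", "p.G61S")}
ilc = {("CDH1", "p.Q23*"), ("PIK3CA", "p.H1047R"), ("TP53", "p.R175H")}

shared = lcis & ilc                  # identical mutations in both lesions
private_lcis = lcis - ilc            # mutations seen only in the LCIS
clonally_related = len(shared) >= 1  # the study's >=1 shared-mutation criterion
```

Real pipelines compare variants at the level of genomic coordinates and alleles after joint calling, but the pairwise logic is the same.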

  13. Expert opinion on laparoscopic surgery for colorectal cancer parallels evidence from a cumulative meta-analysis of randomized controlled trials.

    Directory of Open Access Journals (Sweden)

    Guillaume Martel

    This study sought to synthesize survival outcomes from trials of laparoscopic and open colorectal cancer surgery, and to determine whether expert acceptance of this technology in the literature has parallel cumulative survival evidence. A systematic review of randomized trials was conducted. The primary outcome was survival, and meta-analysis of time-to-event data was conducted. Expert opinion in the literature (published reviews, guidelines, and textbook chapters) on the acceptability of laparoscopic colorectal cancer surgery was graded using a 7-point scale. Pooled survival data were correlated in time with accumulating expert opinion scores. A total of 5,800 citations were screened. Of these, 39 publications pertaining to 23 individual trials were retained. As well, 414 reviews were included (28 guidelines, 30 textbook chapters, 20 systematic reviews, 336 narrative reviews). In total, 5,782 patients were randomized to laparoscopic (n = 3,031) and open (n = 2,751) colorectal surgery. Survival data were presented in 16 publications. Laparoscopic surgery was not inferior to open surgery in terms of overall survival (HR = 0.94, 95% CI 0.80-1.09). Expert opinion in the literature pertaining to the oncologic acceptability of laparoscopic surgery for colon cancer correlated most closely with the publication of large RCTs in 2002-2004. Although increasingly accepted since 2006, laparoscopic surgery for rectal cancer remained controversial. Laparoscopic surgery for colon cancer is non-inferior to open surgery in terms of overall survival, and has been so since 2004. The majority expert opinion in the literature has considered these two techniques to be equivalent since 2002-2004. Laparoscopic surgery for rectal cancer has been increasingly accepted since 2006, but remains controversial. Knowledge translation efforts in this field appear to have paralleled the accumulation of clinical trial evidence.
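
    Pooling time-to-event results of this kind is conventionally done on the log hazard-ratio scale with inverse-variance weights. The sketch below applies standard fixed-effect pooling to two hypothetical studies that each happen to report the overall result quoted above (HR = 0.94, 95% CI 0.80-1.09); it is a generic illustration of the method, not the meta-analysis model the authors used.

```python
import math

def pool_hazard_ratios(hrs_with_ci):
    """Fixed-effect inverse-variance pooling of hazard ratios.
    Each entry is (HR, lower 95% CI bound, upper 95% CI bound)."""
    num = den = 0.0
    for hr, lo, hi in hrs_with_ci:
        log_hr = math.log(hr)
        # Recover the standard error from the width of the 95% CI on the log scale
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se**2
        num += w * log_hr
        den += w
    mean_log = num / den
    se_pooled = math.sqrt(1.0 / den)
    pooled = math.exp(mean_log)
    ci = (math.exp(mean_log - 1.96 * se_pooled),
          math.exp(mean_log + 1.96 * se_pooled))
    return pooled, ci

pooled, ci = pool_hazard_ratios([(0.94, 0.80, 1.09), (0.94, 0.80, 1.09)])
```

With identical inputs the pooled estimate stays at the common HR while the confidence interval tightens, which is exactly what cumulative meta-analysis tracks over time.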

  14. Factors Affecting Student Choice of Career in Science and Engineering: Parallel Studies in Australia, Canada, China, England, Japan and Portugal.

    Science.gov (United States)

    Woolnough, Brian E.; Guo, Yuying; Leite, Maria Salete; Jose de Almeida, Maria; Ryu, Tae; Wang, Zhen; Young, Deidra

    1997-01-01

    Describes studies that utilized questionnaires and interviews to explore the factors affecting the career choices of students. Reveals differences between scientists and nonscientists with regard to their preferred learning styles and relates these differences to career choice and self-perception. (DDR)

  15. Analysis of Plane-Parallel Electron Beam Propagation in Different Media by Numerical Simulation Methods

    Science.gov (United States)

    Miloichikova, I. A.; Bespalov, V. I.; Krasnykh, A. A.; Stuchebrov, S. G.; Cherepennikov, Yu. M.; Dusaev, R. R.

    2018-04-01

    Simulation by the Monte Carlo method is widely used to calculate the character of ionizing radiation interaction with matter. A wide variety of programs based on the given method allows users to choose the most suitable package for solving computational problems. In turn, it is important to know exactly the restrictions of numerical systems in order to avoid gross errors. Results are presented of an estimation of the feasibility of applying the program PCLab (Computer Laboratory, version 9.9) to numerical simulation of the electron energy distribution absorbed in beryllium, aluminum, gold, and water for industrial, research, and clinical beams. The data obtained using the programs ITS and Geant4, the most popular software packages for solving such problems, and the program PCLab are presented in graphic form. A comparison and analysis of the results obtained demonstrate the feasibility of applying the program PCLab to simulation of the absorbed energy distribution and dose of electrons in various materials for energies in the range 1-20 MeV.
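
    The Monte Carlo principle behind all three programs can be shown in toy form. The sketch below samples exponentially distributed interaction depths for a plane-parallel beam and histograms them by depth; real electron transport (continuous slowing down, multiple scattering, secondary particles) is far more involved, and the attenuation coefficient here is an arbitrary illustrative value.

```python
import random

def mc_depth_histogram(n=100_000, mu=1.0, depth_bins=10, max_depth=5.0, seed=1):
    """Toy Monte Carlo: sample exponential first-interaction depths for a
    plane-parallel beam and count interactions per depth bin."""
    rng = random.Random(seed)
    hist = [0] * depth_bins
    for _ in range(n):
        d = rng.expovariate(mu)          # free path length ~ Exp(mu)
        if d < max_depth:
            hist[int(d / max_depth * depth_bins)] += 1
    return hist

hist = mc_depth_histogram()
```

The statistical character of the method is visible immediately: bin counts fall off roughly exponentially with depth, and their relative noise shrinks as n grows.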

  16. Parallel mRNA, proteomics and miRNA expression analysis in cell line models of the intestine.

    Science.gov (United States)

    O'Sullivan, Finbarr; Keenan, Joanne; Aherne, Sinead; O'Neill, Fiona; Clarke, Colin; Henry, Michael; Meleady, Paula; Breen, Laura; Barron, Niall; Clynes, Martin; Horgan, Karina; Doolan, Padraig; Murphy, Richard

    2017-11-07

    To identify miRNA-regulated proteins differentially expressed between Caco-2 and HT-29: two principal cell line models of the intestine. Exponentially growing Caco-2 and HT-29 cells were harvested and prepared for mRNA, miRNA and proteomic profiling. mRNA microarray profiling analysis was carried out using the Affymetrix GeneChip Human Gene 1.0 ST array. miRNA microarray profiling analysis was carried out using the Affymetrix GeneChip miRNA 3.0 array. Quantitative label-free LC-MS/MS proteomic analysis was performed using a Dionex Ultimate 3000 RSLCnano system coupled to a hybrid linear ion trap/Orbitrap mass spectrometer. Peptide identities were validated in Proteome Discoverer 2.1 and were subsequently imported into Progenesis QI software for further analysis. Hierarchical cluster analysis for all three parallel datasets (miRNA, proteomics, mRNA) was conducted in the R software environment using the Euclidean distance measure and Ward's clustering algorithm. The prediction of miRNA and oppositely correlated protein/mRNA interactions was performed using TargetScan 6.1. GO biological process, molecular function and cellular component enrichment analysis was carried out for the DE miRNA, protein and mRNA lists via the Pathway Studio 11.3 Web interface using their Mammalian database. Differential expression (DE) profiling comparing the intestinal cell lines HT-29 and Caco-2 identified 1795 genes, 168 proteins and 160 miRNAs as DE between the two cell lines. At the gene level, 1084 genes were upregulated and 711 were downregulated in the Caco-2 cell line relative to the HT-29 cell line. At the protein level, 57 proteins were found to be upregulated and 111 downregulated in the Caco-2 cell line relative to the HT-29 cell line. Finally, at the miRNA level, 104 were upregulated and 56 downregulated in the Caco-2 cell line relative to the HT-29 cell line. Gene ontology (GO) analysis of the DE mRNA identified cell adhesion, migration and ECM organization, cellular lipid
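
    The hierarchical clustering step is straightforward to reproduce with SciPy's implementation of Euclidean distances and Ward's algorithm, the same combination named above (the study itself used R). The expression matrix below is synthetic, two well-separated groups of profiles, purely to show the mechanics.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic stand-ins for expression profiles: 5 samples per "cell line",
# 50 features, with the two groups offset so they separate cleanly.
group_a = rng.normal(0.0, 1.0, size=(5, 50))
group_b = rng.normal(3.0, 1.0, size=(5, 50))
X = np.vstack([group_a, group_b])

# Euclidean distance + Ward's minimum-variance linkage
Z = linkage(X, method="ward", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree at 2 clusters
```

The linkage matrix Z also feeds directly into a dendrogram plot, which is the usual way such parallel-dataset clusterings are reported.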

  17. DISRUPTIVE EVENT BIOSPHERE DOSE CONVERSION FACTOR ANALYSIS

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The Biosphere Model Report (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). 
The objective of this analysis was to develop the BDCFs for the volcanic

  18. A parallel analysis of hydrolithospheric beds geodata of Narzan mineral water Kislovodsk deposit

    Directory of Open Access Journals (Sweden)

    Д. А. Первухин

    2016-11-01

    The area of the Caucasus Mineral Waters, a spa region, occupies a special place among the other spa regions of Russia due to the richness, diversity, abundance and value of its mineral waters, landscape and climate conditions, and therapeutic muds. Lately, the rate of development of its mineral water resources, both for local spa use and for bottling for retail consumers, has increased. The growing number of mineral water bottling enterprises and sanatorium organizations significantly affects the amount of mineral water uptake. Irrational water uptake results in deterioration of underground water quality and changes in its chemical composition and temperature. Expansion of the depression crater may eventually result in a collapse of seam roofing and the vanishing of many water springs. This applies to all the waters underlying the area of Kavkazskie Mineralnye Vody, so there is a potential threat of degradation of these mineral water deposits. An important task therefore consists in building forecast models of hydrolithospheric processes in the region as the scope of water uptake changes in various parts of the deposit, based on analyzing aerial photographs taken from unmanned aerial vehicles. Currently such analysis is conducted using simple linear algorithms. The paper suggests using the Nvidia CUDA technology for the purpose, adapting the mathematics used to analyze aerial photographs to that technology. The initial data for processing were obtained by aerial photography in the course of remote sensing of the area by unmanned aerial vehicles belonging to OJSC «Narzan», Kislovodsk, an enterprise for mining mineral water. The methods presented in this paper have Author's Certificates issued by the Federal Institute of Industrial Property of the Russian Federation.

  19. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed. The contribution of these to a coal sample is determined utilizing several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. The target transformation factor analysis is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and then their elemental concentrations examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified.

  20. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions, is discussed, as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis in SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.
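
    As a small numerical companion to such an overview: the principal-component method of factor extraction obtains loadings by rescaling eigenvectors of the correlation matrix. The correlation matrix below is fabricated for a hypothetical one-factor questionnaire; SPSS, as used in the paper, performs the equivalent computation internally (and usually follows it with rotation, which is omitted here).

```python
import numpy as np

def extract_loadings(corr, n_factors=1):
    """Principal-component method of factor extraction: loadings are
    eigenvectors of the correlation matrix scaled by sqrt(eigenvalue)."""
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1][:n_factors]
    return vecs[:, order] * np.sqrt(vals[order])

# Hypothetical correlation matrix for four items loading on one factor
R = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.4],
              [0.4, 0.4, 0.4, 1.0]])

loadings = extract_loadings(R, n_factors=1)
communalities = (loadings ** 2).sum(axis=1)  # variance explained per item
```

Reading the output mirrors the SPSS tables the paper walks through: loadings correspond to the component matrix, and communalities to the extraction column.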

  1. LightForce Photon-Pressure Collision Avoidance: Updated Efficiency Analysis Utilizing a Highly Parallel Simulation Approach

    Science.gov (United States)

    Stupl, Jan; Faber, Nicolas; Foster, Cyrus; Yang, Fan Yang; Nelson, Bron; Aziz, Jonathan; Nuttall, Andrew; Henze, Chris; Levit, Creon

    2014-01-01

    This paper provides an updated efficiency analysis of the LightForce space debris collision avoidance scheme. LightForce aims to prevent collisions on warning by utilizing photon pressure from ground-based, commercial off-the-shelf lasers. Past research has shown that a few ground-based systems consisting of 10-kilowatt-class lasers directed by 1.5-meter telescopes with adaptive optics could lower the expected number of collisions in Low Earth Orbit (LEO) by an order of magnitude. Our simulation approach utilizes the entire Two Line Element (TLE) catalogue in LEO for a given day as initial input. Least-squares fitting of a TLE time series is used for an improved orbit estimate. We then calculate the probability of collision for all LEO objects in the catalogue for a time step of the simulation. The conjunctions that exceed a threshold probability of collision are then engaged by a simulated network of laser ground stations. After those engagements, the perturbed orbits are used to re-assess the probability of collision and evaluate the efficiency of the system. This paper describes new simulations with three updated aspects: 1) By utilizing a highly parallel simulation approach employing hundreds of processors, we have extended our analysis to a much broader dataset. The simulation time is extended to one year. 2) We analyze not only the efficiency of LightForce on conjunctions that naturally occur, but also take into account conjunctions caused by orbit perturbations due to LightForce engagements. 3) We use a new simulation approach that is regularly updating the LightForce engagement strategy, as it would be during actual operations. In this paper we present our simulation approach to parallelize the efficiency analysis, its computational performance and the resulting expected efficiency of the LightForce collision avoidance system. Results indicate that utilizing a network of four LightForce stations with 20-kilowatt lasers, 85% of all conjunctions with a

  2. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle

  3. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…
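Several of these records evaluate parallel analysis as a criterion for choosing the number of factors. As a minimal generic sketch of Horn's procedure (not the implementation from any cited study), observed correlation-matrix eigenvalues are compared against eigenvalues from uncorrelated random data of the same size:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: keep components whose observed
    correlation-matrix eigenvalues exceed the chosen percentile of
    eigenvalues obtained from uncorrelated random data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    return int(np.sum(obs > np.percentile(rand, percentile, axis=0)))

# Synthetic example: six variables driven by two latent factors
rng = np.random.default_rng(1)
scores = rng.standard_normal((500, 2))
loadings = np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
x = scores @ loadings + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(x))  # 2
```

This is the PA-PCA variant (eigenvalues of the full correlation matrix); the PA-PAF variant discussed above instead uses reduced correlation matrices with communality estimates on the diagonal.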

  4. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-07-21

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis reports (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in

  5. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis reports (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in the biosphere. The biosphere process

  6. Exploratory Bi-Factor Analysis: The Oblique Case

    Science.gov (United States)

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 2:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  7. Exploratory Bi-factor Analysis: The Oblique Case

    OpenAIRE

    Jennrich, Robert L.; Bentler, Peter M.

    2011-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...

  8. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  9. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this

  10. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  11. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this analysis was to develop the BDCFs for the volcanic ash

  12. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....

  13. Massively parallel sequencing and genome-wide copy number analysis revealed a clonal relationship in benign metastasizing leiomyoma.

    Science.gov (United States)

    Wu, Ren-Chin; Chao, An-Shine; Lee, Li-Yu; Lin, Gigin; Chen, Shu-Jen; Lu, Yen-Jung; Huang, Huei-Jean; Yen, Chi-Feng; Han, Chien Min; Lee, Yun-Shien; Wang, Tzu-Hao; Chao, Angel

    2017-07-18

    Benign metastasizing leiomyoma (BML) is a rare disease entity typically presenting as multiple extrauterine leiomyomas associated with a uterine leiomyoma. It has been hypothesized that the extrauterine leiomyomata represent distant metastasis of the uterine leiomyoma. To date, the only molecular evidence supporting this hypothesis was derived from clonality analyses based on X-chromosome inactivation assays. Here, we sought to address this issue by examining paired specimens of synchronous pulmonary and uterine leiomyomata from three patients using targeted massively parallel sequencing and molecular inversion probe array analysis for detecting somatic mutations and copy number aberrations. We detected identical non-hot-spot somatic mutations and similar patterns of copy number aberrations (CNAs) in paired pulmonary and uterine leiomyomata from two patients, indicating the clonal relationship between pulmonary and uterine leiomyomata. In addition to loss of chromosome 22q found in the literature, we identified additional recurrent CNAs including losses of chromosome 3q and 11q. In conclusion, our findings of the clonal relationship between synchronous pulmonary and uterine leiomyomas support the hypothesis that BML represents a condition wherein a uterine leiomyoma disseminates to distant extrauterine locations.

  14. Massively parallel sequencing and genome-wide copy number analysis revealed a clonal relationship in benign metastasizing leiomyoma

    Science.gov (United States)

    Lee, Li-Yu; Lin, Gigin; Chen, Shu-Jen; Lu, Yen-Jung; Huang, Huei-Jean; Yen, Chi-Feng; Han, Chien Min; Lee, Yun-Shien; Wang, Tzu-Hao; Chao, Angel

    2017-01-01

    Benign metastasizing leiomyoma (BML) is a rare disease entity typically presenting as multiple extrauterine leiomyomas associated with a uterine leiomyoma. It has been hypothesized that the extrauterine leiomyomata represent distant metastasis of the uterine leiomyoma. To date, the only molecular evidence supporting this hypothesis was derived from clonality analyses based on X-chromosome inactivation assays. Here, we sought to address this issue by examining paired specimens of synchronous pulmonary and uterine leiomyomata from three patients using targeted massively parallel sequencing and molecular inversion probe array analysis for detecting somatic mutations and copy number aberrations. We detected identical non-hot-spot somatic mutations and similar patterns of copy number aberrations (CNAs) in paired pulmonary and uterine leiomyomata from two patients, indicating the clonal relationship between pulmonary and uterine leiomyomata. In addition to loss of chromosome 22q found in the literature, we identified additional recurrent CNAs including losses of chromosome 3q and 11q. In conclusion, our findings of the clonal relationship between synchronous pulmonary and uterine leiomyomas support the hypothesis that BML represents a condition wherein a uterine leiomyoma disseminates to distant extrauterine locations. PMID:28533481

  15. Pros and cons of rotating ground motion records to fault-normal/parallel directions for response history analysis of buildings

    Science.gov (United States)

    Kalkan, Erol; Kwong, Neal S.

    2014-01-01

    According to the regulatory building codes in the United States (e.g., 2010 California Building Code), at least two horizontal ground motion components are required for three-dimensional (3D) response history analysis (RHA) of building structures. For sites within 5 km of an active fault, these records should be rotated to fault-normal/fault-parallel (FN/FP) directions, and two RHAs should be performed separately (when FN and then FP are aligned with the transverse direction of the structural axes). It is assumed that this approach will lead to two sets of responses that envelope the range of possible responses over all nonredundant rotation angles. This assumption is examined here, for the first time, using a 3D computer model of a six-story reinforced-concrete instrumented building subjected to an ensemble of bidirectional near-fault ground motions. Peak values of engineering demand parameters (EDPs) were computed for rotation angles ranging from 0 through 180° to quantify the difference between peak values of EDPs over all rotation angles and those due to FN/FP direction rotated motions. It is demonstrated that rotating ground motions to FN/FP directions (1) does not always lead to the maximum responses over all angles, (2) does not always envelope the range of possible responses, and (3) does not provide maximum responses for all EDPs simultaneously even if it provides a maximum response for a specific EDP.
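The study's point that a single FN/FP rotation need not bound responses over all rotation angles can be illustrated with a toy rotation of two horizontal components (the helper, signals, and the 30° "fault-normal" angle below are all hypothetical, not the study's data or code):

```python
import numpy as np

def rotate_pair(a1, a2, theta_deg):
    """Rotate two orthogonal horizontal ground-motion histories by
    theta degrees; returns the rotated component pair."""
    t = np.radians(theta_deg)
    return a1 * np.cos(t) + a2 * np.sin(t), -a1 * np.sin(t) + a2 * np.cos(t)

# Synthetic accelerograms stand in for recorded components. The peak of a
# response quantity over all nonredundant angles (0-180 deg) bounds the peak
# obtained at any single orientation, such as an assumed fault-normal angle.
rng = np.random.default_rng(0)
a1, a2 = rng.standard_normal(1000), rng.standard_normal(1000)
envelope = max(np.abs(rotate_pair(a1, a2, th)[0]).max() for th in range(180))
fn_peak = np.abs(rotate_pair(a1, a2, 30.0)[0]).max()  # arbitrary "FN" angle
print(envelope >= fn_peak)  # True
```

For a nonlinear structure, the engineering demand parameters are of course not simple rotations of the input, which is exactly why the paper computes RHAs over the full range of angles.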

  16. Finite mixture model applied in the analysis of a turbulent bistable flow on two parallel circular cylinders

    Energy Technology Data Exchange (ETDEWEB)

    Paula, A.V. de, E-mail: vagtinski@mecanica.ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil); Möller, S.V., E-mail: svmoller@ufrgs.br [PROMEC – Programa de Pós Graduação em Engenharia Mecânica, UFRGS – Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil)

    2013-11-15

    This paper presents a study of the bistable phenomenon which occurs in the turbulent flow impinging on circular cylinders placed side-by-side. Time series of axial and transversal velocity obtained with the constant temperature hot wire anemometry technique in an aerodynamic channel are used as input data in a finite mixture model, to classify the observed data according to a family of probability density functions. Wavelet transforms are applied to analyze the unsteady turbulent signals. Results of flow visualization show that the flow is predominantly two-dimensional. A double-well energy model is suggested to describe the behavior of the bistable phenomenon in this case. -- Highlights: ► Bistable flow on two parallel cylinders is studied with hot wire anemometry as a first step for the application on the analysis to tube bank flow. ► The method of maximum likelihood estimation is applied to hot wire experimental series to classify the data according to PDF functions in a mixture model approach. ► Results show no evident correlation between the changes of flow modes with time. ► An energy model suggests the presence of more than two flow modes.
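A minimal stand-in for the finite-mixture classification described above is a two-component 1-D Gaussian mixture fitted by maximum likelihood via EM; the function name, synthetic "velocity" data, and mode locations below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def fit_two_mode_mixture(x, n_iter=200):
    """Minimal EM fit of a two-component 1-D Gaussian mixture,
    classifying samples of a bistable signal into two flow modes."""
    mu = np.array([x.min(), x.max()])  # spread initial means apart
    sd = np.full(2, x.std())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibility of each mode for each sample
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = resp.sum(axis=0)
        w, mu = nk / x.size, (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

# Synthetic bistable record: residence in two velocity modes near 2 and 5
rng = np.random.default_rng(0)
x = np.r_[2.0 + 0.3 * rng.standard_normal(400), 5.0 + 0.3 * rng.standard_normal(600)]
w, mu, sd = fit_two_mode_mixture(x)
```

The recovered means land near the two mode centers, and the mixture weights estimate the fraction of time the flow spends in each mode.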

  17. Prediction of the single-phase turbulent mixing rate between two parallel subchannels using a subchannel geometry factor

    International Nuclear Information System (INIS)

    Sadatomi, M.; Kawahara, A.; Sato, Y.

    1996-01-01

    This paper presents a simple method for predicting the single-phase turbulent mixing rate between adjacent subchannels in nuclear fuel bundles. In this method, the mixing rate is computed as the sum of the two components of turbulent diffusion and convective transfer. Of these, the turbulent diffusion component is calculated using a newly defined subchannel geometry factor F* and the mean turbulent diffusivity for each subchannel which is computed from Elder's equation. The convective transfer component is evaluated from a mixing Stanton number correlation obtained empirically in this study. In order to confirm the validity of the proposed method, experimental data on turbulent mixing rate were obtained using a tracer technique under adiabatic conditions with three test channels, each consisting of two subchannels. The range of Reynolds number covered was 5000-66 000. From comparisons of the predicted turbulent mixing rates with the experimental data of other investigators as well as the authors, it has been confirmed that the proposed method can predict the data in a range of gap clearance to rod diameter ratio of 0.02-0.4 within about ±25% for square array bundles and about ±35% for triangular array bundles. (orig.)

  18. Comparative analysis of the serial/parallel numerical calculation of boiling channels thermohydraulics; Analisis comparativo del calculo numerico serie/paralelo de la termohidraulica de canales con ebullicion

    Energy Technology Data Exchange (ETDEWEB)

    Cecenas F, M., E-mail: mcf@iie.org.mx [Instituto Nacional de Electricidad y Energias Limpias, Reforma 113, Col. Palmira, 62490 Cuernavaca, Morelos (Mexico)

    2017-09-15

    A parallel channel model with boiling and point neutron kinetics is used to compare the implementation of its programming in C language through a conventional scheme and through a parallel programming scheme. In both cases the subroutines written in C are practically the same, but they vary in the way of controlling the execution of the tasks that calculate the different channels. Parallel Virtual Machine is used for the parallel solution, which allows the passage of messages between tasks to control convergence and transfer the variables of interest between the tasks that run simultaneously on a platform equipped with a multi-core microprocessor. For some problems defined as a study case, such as the one presented in this paper, a computer with two cores can reduce the computation time to 54-56% of the time required by the same program in its conventional sequential version. Similarly, a processor with four cores can reduce the time to 22-33% of the execution time of the conventional serial version. These substantial reductions in computation time are very encouraging for all applications that can be parallelized and whose execution time is an important factor. (Author)
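Under an Amdahl's-law model (an assumption on my part; the report only quotes the measured times), the parallel/serial time ratio is r = (1 - p) + p/n for parallelizable fraction p on n cores, which can be inverted to see what fraction of this code the reported speedups imply is parallel:

```python
def parallel_fraction(time_ratio, n_cores):
    """Invert Amdahl's law: given the parallel/serial time ratio
    r = (1 - p) + p / n, solve for the parallelizable fraction p."""
    return (1.0 - time_ratio) * n_cores / (n_cores - 1.0)

# Ratios quoted in the abstract: ~54-56% on two cores, ~22-33% on four
p_two = parallel_fraction(0.55, 2)
p_four = parallel_fraction(0.28, 4)
print(round(p_two, 2), round(p_four, 2))  # 0.9 0.96
```

Both observations are consistent with roughly 90% or more of the channel calculation running in parallel, with the remainder dominated by the message passing and convergence control described above.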

  19. Analysis of the contribution of sedimentation to bacterial mass transport in a parallel plate flow chamber Part II : Use of fluorescence imaging

    NARCIS (Netherlands)

    Li, Jiuyi; Busscher, Henk J.; van der Mei, Henny C.; Norde, Willem; Krom, Bastiaan P.; Sjollema, Jelmer

    2011-01-01

    Using a new phase-contrast microscopy-based method of analysis, sedimentation has recently been demonstrated to be the major mass transport mechanism of bacteria towards substratum surfaces in a parallel plate flow chamber (J. Li, H.J. Busscher, W. Norde, J. Sjollema, Colloid Surf. B 84 (2011) 76).

  20. My-Forensic-Loci-queries (MyFLq) framework for analysis of forensic STR data generated by massive parallel sequencing.

    Science.gov (United States)

    Van Neste, Christophe; Vandewoestyne, Mado; Van Criekinge, Wim; Deforce, Dieter; Van Nieuwerburgh, Filip

    2014-03-01

    Forensic scientists are currently investigating how to transition from capillary electrophoresis (CE) to massive parallel sequencing (MPS) for analysis of forensic DNA profiles. MPS offers several advantages over CE such as virtually unlimited multiplexing of loci, combining both short tandem repeat (STR) and single nucleotide polymorphism (SNP) loci, small amplicons without constraints of size separation, more discrimination power, deep mixture resolution and sample multiplexing. We present our bioinformatic framework My-Forensic-Loci-queries (MyFLq) for analysis of MPS forensic data. For allele calling, the framework uses a MySQL reference allele database with automatically determined regions of interest (ROIs) by a generic maximal flanking algorithm which makes it possible to use any STR or SNP forensic locus. Python scripts were designed to automatically make allele calls starting from raw MPS data. We also present a method to assess the usefulness and overall performance of a forensic locus with respect to MPS, as well as methods to estimate whether an unknown allele, whose sequence is not present in the MySQL database, is in fact a new allele or a sequencing error. The MyFLq framework was applied to an Illumina MiSeq dataset of a forensic Illumina amplicon library, generated from multilocus STR polymerase chain reaction (PCR) on both single contributor samples and multiple person DNA mixtures. Although the multilocus PCR was not yet optimized for MPS in terms of amplicon length or locus selection, the results are excellent for most loci, showing a high signal-to-noise ratio, correct allele calls, and a low limit of detection for minor DNA contributors in mixed DNA samples. Technically, forensic MPS affords great promise for routine implementation in forensic genomics. The method is also applicable to adjacent disciplines such as molecular autopsy in legal medicine and in mitochondrial DNA research. Copyright © 2013 The Authors. Published by
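The core of allele calling from MPS reads can be caricatured as grouping identical read sequences per locus and discarding low-frequency noise. The sketch below is a drastic simplification of the MyFLq approach (which uses a MySQL reference allele database and flanking-region detection); the function name, threshold, and toy reads are hypothetical:

```python
from collections import Counter

def call_alleles(reads, min_fraction=0.05):
    """Toy allele caller for one locus: count identical read sequences
    and keep those above a noise-fraction threshold."""
    counts = Counter(reads)
    total = sum(counts.values())
    return {seq for seq, n in counts.items() if n / total >= min_fraction}

# A heterozygous single-contributor locus with 2% sequencing noise
reads = ["ATAT"] * 480 + ["ATATAT"] * 500 + ["ATTT"] * 20
print(sorted(call_alleles(reads)))  # ['ATAT', 'ATATAT']
```

In mixture interpretation the threshold trades off noise rejection against the limit of detection for minor contributors, which is why the paper reports both.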

  1. Parallelization of the unstructured Navier-Stokes solver LILAC for the aero-thermal analysis of a gas-cooled reactor

    International Nuclear Information System (INIS)

    Kim, J. T.; Kim, S. B.; Lee, W. J.

    2004-01-01

    Currently, the LILAC code is under development to analyze the thermo-hydraulics of the gas-cooled reactor (GCR), especially the high-temperature GCR, which is one of the Generation IV nuclear reactors. The LILAC code was originally developed for the analysis of thermo-hydraulics in a molten pool, and it is now being modified to resolve compressible gas flows in the GCR. As the internal flow geometries of the GCR and its aero-thermal flows become more complex, the number of computational cells increases and finally exceeds the computing power of desktop computers. To overcome this problem and resolve the physics of interest in the GCR, the LILAC code was parallelized by decomposition of the computational domain, or grid. Some benchmark problems are solved with the parallelized LILAC code, and the speed-up achieved by parallel computation is evaluated and described in this article

  2. Parallel single-cell analysis of active caspase-3/7 in apoptotic and non-apoptotic cells

    Czech Academy of Sciences Publication Activity Database

    Ledvina, Vojtěch; Janečková, Eva; Matalová, Eva; Klepárník, Karel

    2017-01-01

    Roč. 409, č. 1 (2017), s. 269-274 ISSN 1618-2642 R&D Projects: GA ČR(CZ) GA14-28254S Institutional support: RVO:68081715; RVO:67985904 Keywords: single-cell analysis * bioluminescence * apoptosis * caspase-3/7 Subject RIV: CB - Analytical Chemistry, Separation; EB - Genetics; Molecular Biology (UZFG-Y) OBOR OECD: Analytical chemistry; Developmental biology (UZFG-Y) Impact factor: 3.431, year: 2016

  4. Flynn Effects on Sub-Factors of Episodic and Semantic Memory: Parallel Gains over Time and the Same Set of Determining Factors

    Science.gov (United States)

    Ronnlund, Michael; Nilsson, Lars-Goran

    2009-01-01

    The study examined the extent to which time-related gains in cognitive performance, so-called Flynn effects, generalize across sub-factors of episodic memory (recall and recognition) and semantic memory (knowledge and fluency). We conducted time-sequential analyses of data drawn from the Betula prospective cohort study, involving four age-matched…

  5. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is currently an urgent question, and legalizing parallel import in Russia is expedient. This conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  6. Experimental Study and steady state stability analysis of CLL-T Series Parallel Resonant Converter with Fuzzy controller using State Space Analysis

    Directory of Open Access Journals (Sweden)

    C. Nagarajan

    2012-09-01

    Full Text Available This paper presents a closed-loop CLL-T (capacitor-inductor-inductor) Series Parallel Resonant Converter (SPRC), which has been simulated and whose performance is analyzed. A three-element CLL-T SPRC working under load-independent operation (voltage-type and current-type load) is presented. The steady-state stability analysis of the CLL-T SPRC has been developed using the state-space technique, and the output voltage is regulated by a fuzzy controller. The simulation study indicates the superiority of fuzzy control over conventional control methods. The proposed approach is expected to provide better voltage regulation under dynamic load conditions. A prototype 300 W, 100 kHz converter was designed and built to demonstrate experimentally the dynamic and steady-state performance of the CLL-T SPRC, which is compared with the simulation studies.

  7. Deep evolutionary comparison of gene expression identifies parallel recruitment of trans-factors in two independent origins of C4 photosynthesis.

    Directory of Open Access Journals (Sweden)

    Sylvain Aubry

    2014-06-01

    Full Text Available With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼ 50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays) whose last common ancestor diverged ∼ 140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors

  8. Deep evolutionary comparison of gene expression identifies parallel recruitment of trans-factors in two independent origins of C4 photosynthesis.

    Science.gov (United States)

    Aubry, Sylvain; Kelly, Steven; Kümpers, Britta M C; Smith-Unna, Richard D; Hibberd, Julian M

    2014-06-01

    With at least 60 independent origins spanning monocotyledons and dicotyledons, the C4 photosynthetic pathway represents one of the most remarkable examples of convergent evolution. The recurrent evolution of this highly complex trait involving alterations to leaf anatomy, cell biology and biochemistry allows an increase in productivity by ∼ 50% in tropical and subtropical areas. The extent to which separate lineages of C4 plants use the same genetic networks to maintain C4 photosynthesis is unknown. We developed a new informatics framework to enable deep evolutionary comparison of gene expression in species lacking reference genomes. We exploited this to compare gene expression in species representing two independent C4 lineages (Cleome gynandra and Zea mays) whose last common ancestor diverged ∼ 140 million years ago. We define a cohort of 3,335 genes that represent conserved components of leaf and photosynthetic development in these species. Furthermore, we show that genes encoding proteins of the C4 cycle are recruited into networks defined by photosynthesis-related genes. Despite the wide evolutionary separation and independent origins of the C4 phenotype, we report that these species use homologous transcription factors to both induce C4 photosynthesis and to maintain the cell specific gene expression required for the pathway to operate. We define a core molecular signature associated with leaf and photosynthetic maturation that is likely shared by angiosperm species derived from the last common ancestor of the monocotyledons and dicotyledons. We show that deep evolutionary comparisons of gene expression can reveal novel insight into the molecular convergence of highly complex phenotypes and that parallel evolution of trans-factors underpins the repeated appearance of C4 photosynthesis. Thus, exploitation of extant natural variation associated with complex traits can be used to identify regulators. Moreover, the transcription factors that are shared by

  9. Time Series Factor Analysis with an Application to Measuring Money

    NARCIS (Netherlands)

    Gilbert, Paul D.; Meijer, Erik

    2005-01-01

    Time series factor analysis (TSFA) and its associated statistical theory are developed. Unlike dynamic factor analysis (DFA), TSFA obviates the need for explicitly modeling the process dynamics of the underlying phenomena. It also differs from standard factor analysis (FA) in important respects: the

  10. Factors related to the parallel use of complementary and alternative medicine with conventional medicine among patients with chronic conditions in South Korea

    Directory of Open Access Journals (Sweden)

    Byunghee Choi

    2017-06-01

    Conclusion: In the rural area of Korea, most inpatients who used CM for the management of chronic conditions used CAM in parallel. KM was the most common CAM modality, and the pattern of parallel use varied according to the disease conditions.

  11. Using exploratory factor analysis in personality research: Best-practice recommendations

    Directory of Open Access Journals (Sweden)

    Sumaya Laher

    2010-11-01

    Research purpose: This article presents more objective methods to determine the number of factors, most notably parallel analysis and Velicer's minimum average partial (MAP). The benefits of rotation are also discussed. The article argues for more consistent use of Procrustes rotation and congruence coefficients in factor analytic studies. Motivation for the study: Exploratory factor analysis is often criticised for not being rigorous and objective enough in terms of the methods used to determine the number of factors, the rotations to be used and ultimately the validity of the factor structure. Research design, approach and method: The article adopts a theoretical stance to discuss the best-practice recommendations for factor analytic research in the field of psychology. Following this, an example located within personality assessment and using the NEO-PI-R specifically is presented. A total of 425 students at the University of the Witwatersrand completed the NEO-PI-R. These responses were subjected to a principal components analysis using varimax rotation. The rotated solution was subjected to a Procrustes rotation with Costa and McCrae's (1992) matrix as the target matrix. Congruence coefficients were also computed. Main findings: The example indicates the use of the methods recommended in the article and demonstrates an objective way of determining the number of factors. It also provides an example of Procrustes rotation with coefficients of agreement as an indication of how factor analytic results may be presented more rigorously in local research. Practical/managerial implications: It is hoped that the recommendations in this article will have best-practice implications for both researchers and practitioners in the field who employ factor analysis regularly. Contribution/value-add: This article will prove useful to all researchers employing factor analysis and has the potential to set the trend for better use of factor analysis in the South African context.
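The parallel analysis method recommended above can be sketched in a few lines. The following is a minimal illustration in Python with NumPy (not the code used in the article) of Horn's criterion: retain only those factors whose observed eigenvalues exceed the chosen percentile of eigenvalues obtained from random data of the same dimensions.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain components whose observed
    eigenvalues exceed those of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, descending
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of correlation matrices of random normal data
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        x = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(x, rowvar=False))[::-1]
    # Per-rank threshold at the requested percentile
    threshold = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > threshold))
```

The 95th percentile criterion mirrors the recommendation discussed in the article; using the mean of the random eigenvalues instead is the other common choice.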

  12. Continuous fraction collection of gas chromatographic separations with parallel mass spectrometric detection applied to cell-based bioactivity analysis

    NARCIS (Netherlands)

    Jonker, Willem; Zwart, Nick; Stockl, Jan B.; de Koning, Sjaak; Schaap, Jaap; Lamoree, Marja H.; Somsen, Govert W.; Hamers, Timo; Kool, Jeroen

    2017-01-01

    We describe the development and evaluation of a GC-MS fractionation platform that combines high-resolution fraction collection of full chromatograms with parallel MS detection. A y-split at the column divides the effluent towards the MS detector and towards an inverted y-piece where vaporized trap

  13. Analysis of the pool critical assembly benchmark using raptor-M3G, a parallel deterministic radiation transport code - 289

    International Nuclear Information System (INIS)

    Fischer, G.A.

    2010-01-01

    The PCA Benchmark is analyzed using RAPTOR-M3G, a parallel SN radiation transport code. A variety of mesh structures, angular quadrature sets, cross section treatments, and reactor dosimetry cross sections are presented. The results show that RAPTOR-M3G is generally suitable for PWR neutron dosimetry applications. (authors)

  14. Electromagnetic ion-cyclotron instability in the presence of a parallel electric field with general loss-cone distribution function - particle aspect analysis

    Directory of Open Access Journals (Sweden)

    G. Ahirwar

    2006-08-01

    Full Text Available The effect of parallel electric field on the growth rate, parallel and perpendicular resonant energy and marginal stability of the electromagnetic ion-cyclotron (EMIC) wave with general loss-cone distribution function in a low β homogeneous plasma is investigated by particle aspect approach. The effect of the steepness of the loss-cone distribution on the electromagnetic ion-cyclotron wave is also investigated. The whole plasma is considered to consist of resonant and non-resonant particles. It is assumed that resonant particles participate in the energy exchange with the wave, whereas non-resonant particles support the oscillatory motion of the wave. The wave is assumed to propagate parallel to the static magnetic field. The effect of the parallel electric field with the general distribution function is to control the growth rate of the EMIC waves, whereas the effect of steep loss-cone distribution is to enhance the growth rate and perpendicular heating of the ions. This study is relevant to the analysis of ion conics in the presence of an EMIC wave in the auroral acceleration region of the Earth's magnetoplasma.

  15. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    of the model stays high at longer horizons. The estimated factors are strongly statistically significant according to a bootstrap resampling method which takes into account that the factors are estimated regressors. The simple three-factor model also contains substantial out-of-sample predictive power...

  16. Erikson Psychosocial Stage Inventory: A Factor Analysis

    Science.gov (United States)

    Gray, Mary McPhail; And Others

    1986-01-01

    The 72-item Erikson Psychosocial Stage Inventory (EPSI) was factor analyzed for a group of 534 university freshmen and sophomore students. Seven factors emerged, which were labeled Initiative, Industry, Identity, Friendship, Dating, Goal Clarity, and Self-Confidence. Items representing Erikson's factors, Trust and Autonomy, were dispersed across…

  17. Operability probabilistic analysis: methodology for economic improvement through the parallelization of process plants; Analisis probabilistico de operatividad: metodologia para mejora economica a traves de la paralelizacion de plantas de proceso

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, A.; Francois, J. L.; Martin del Campo, C.; Nelson, P. F., E-mail: iqalexmdz@yahoo.com.mx [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac No. 8532, Col. Progreso, 62550 Jiutepec, Morelos (Mexico)

    2012-10-15

    One of the major challenges that emergent technologies must overcome is economic competitiveness with respect to currently established technologies, since they must not only use energy resources and raw materials efficiently in their productive processes, but also maximize the return on the economic resources tied to the initial investment in the plant. In special cases, such as those related to electric power or fuel generation, the fixed cost represents a high percentage of the total cost, and a strong dependence on the plant factor is observed. This parameter is in turn subject to variations that, while not prospective, are predictable by means of analytic tools able to relate the failure rates of the elements present in the plant to the probability of out-of-service time, such as the Operability Probabilistic Analysis. This study evaluated the implications of changes in plant configuration, with the purpose of determining the economic advantages of a greater or lesser division of equipment into parallel trains (parallelization); the general objective function used to evaluate the parallelization alternatives is established, and the basic concepts needed to carry out this methodology are presented. Finally, a case study is developed for the sulfuric acid decomposition section of a hydrogen production plant. (Author)
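The economic trade-off behind parallelization can be illustrated with a simple reliability calculation (a sketch assuming independent, identical trains; not the model of the paper): splitting capacity into n parallel trains of 1/n capacity each leaves the expected plant factor unchanged, but reduces the probability of a total outage from (1 - a) to (1 - a)^n, which is what must be weighed against the extra investment in equipment.

```python
from math import comb

def expected_capacity(n, a):
    """Expected capacity factor of a plant split into n parallel trains,
    each carrying 1/n of capacity with independent availability a.
    Sums over the number k of trains that are up."""
    return sum(comb(n, k) * a**k * (1 - a)**(n - k) * (k / n)
               for k in range(n + 1))

def total_outage_prob(n, a):
    """Probability that all n trains are down simultaneously."""
    return (1 - a)**n
```

By linearity the expected capacity factor equals a for any n; the benefit of parallelization in this toy model is entirely in the tail, e.g. for a = 0.9 the total-outage probability drops from 10% with one train to 1% with two.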

  18. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.; Rauchwerger, Lawrence

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
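The independence condition S = Ø can be illustrated with a toy runtime check (a Python sketch, far simpler than the USR/predicate machinery of the paper): record the array index sets each iteration reads and writes, and accept the loop as parallelizable only if no iteration's writes overlap any other iteration's reads or writes.

```python
def is_parallelizable(writes, reads):
    """Toy runtime independence test: a loop is safe to parallelize if
    no iteration writes an index that another iteration reads or writes
    (the set of cross-iteration conflicts S is empty).
    writes[i] / reads[i] are the index sets touched by iteration i."""
    seen_writes, seen_reads = set(), set()
    for w, r in zip(writes, reads):
        w, r = set(w), set(r)
        # cross-iteration conflicts: write-write, write-read, read-write
        if w & seen_writes or w & seen_reads or r & seen_writes:
            return False
        seen_writes |= w
        seen_reads |= r
    return True
```

For example, the loop `a[i] = b[i]` passes (each iteration touches only its own index), while `a[i] = a[i+1]` fails because each iteration writes an index the previous iteration read.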

  19. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop\\'s memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.

  20. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    Wasiolek, M.

    2000-01-01

    The purpose of this report was to document the process leading to the development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Consideration of radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of different exposure pathways' contributions to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain
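The probabilistic-realization step can be pictured with a generic Monte Carlo propagation sketch (Python/NumPy; GENII-S is a dedicated code, and the lognormal BDCF model, parameter names, and units below are illustrative assumptions only): sample an uncertain BDCF many times and multiply by the groundwater concentration to obtain a dose distribution.

```python
import numpy as np

def dose_distribution(conc, bdcf_median, bdcf_gsd, n=10_000, seed=0):
    """Illustrative propagation of BDCF uncertainty into dose.
    conc        -- radionuclide concentration in groundwater (hypothetical units)
    bdcf_median -- median biosphere dose conversion factor (assumed lognormal)
    bdcf_gsd    -- geometric standard deviation of the BDCF
    Returns n sampled doses (conc * BDCF)."""
    rng = np.random.default_rng(seed)
    # lognormal parameterized by the log of the median and the log of the GSD
    bdcf = rng.lognormal(np.log(bdcf_median), np.log(bdcf_gsd), n)
    return conc * bdcf
```

Percentiles of the returned array then stand in for the uncertainty bands that a full biosphere model would report.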

  1. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2000-12-28

    The purpose of this report was to document the process leading to, and the results of, the development of radionuclide-, exposure scenario-, and ash thickness-specific Biosphere Dose Conversion Factors (BDCFs) for the postulated postclosure extrusive igneous event (volcanic eruption) at Yucca Mountain. BDCF calculations were done for seventeen radionuclides. The selection of radionuclides included those that may be significant dose contributors during the compliance period of up to 10,000 years, as well as radionuclides of importance for up to 1 million years postclosure. The approach documented in this report takes into account human exposure during three different phases at the time of, and after, volcanic eruption. Calculations of disruptive event BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. The pathway analysis included consideration of different exposure pathways' contributions to the BDCFs. BDCFs for volcanic eruption, when combined with the concentration of radioactivity deposited by eruption on the soil surface, allow calculation of potential radiation doses to the receptor of interest. Calculation of radioactivity deposition is outside the scope of this report and so is the transport of contaminated ash from the volcano to the location of the receptor. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA), in which doses are calculated to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.

  2. Theoretical analysis on ac loss properties of two-strand parallel conductors composed of superconducting multifilamentary strands

    CERN Document Server

    Iwakuma, M; Funaki, K

    2002-01-01

    The ac loss properties of two-strand parallel conductors composed of superconducting multifilamentary strands were theoretically investigated. The constituent strands generally need to be insulated and transposed for the sake of uniform current distribution and low ac loss. In case the transposition points deviate from the optimum ones, shielding current is induced according to the interlinkage magnetic flux of the twisted loop enclosed by the insulated strands and the contact resistances at the terminals. It produces an additional ac loss. Supposing a simple situation where a two-strand parallel conductor with one-point transposition is exposed to a uniform ac magnetic field, the basic equations for the magnetic field were proposed and the theoretical expressions of the additional ac losses derived. As a result, the following features were shown. The additional ac loss in the non-saturation case, where the induced shielding current is less than the critical current of a strand, is proportional to the square ...

  3. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    Science.gov (United States)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment that is particularly suited for handling the image, vision and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities that are designed to execute efficiently on parallel computers. A key feature of the environment is an interface which allows a scientist to specify high-level directives about how query execution should occur.

  4. Analysis of Relative Parallelism Between Hamular-Incisive-Papilla Plane and Campers Plane in Edentulous Subjects: A Comparative Study.

    Science.gov (United States)

    Tambake, Deepti; Shetty, Shilpa; Satish Babu, C L; Fulari, Sangamesh G

    2014-12-01

    The study was undertaken to evaluate the parallelism between the hamular-incisive-papilla (HIP) plane and Camper's plane, and to determine which posterior reference point on the tragus (superior, middle, or inferior) yields a Camper's plane parallel to the HIP plane, using digital lateral cephalograms. Fifty edentulous subjects with well-formed ridges were selected for the study. Master casts were obtained using the standard selective pressure impression procedure, and stainless steel spherical bearings were glued to each cast at the deepest point of the hamular notches and at the centre of the incisive papilla. The study templates were fabricated with autopolymerizing acrylic resin. The subjects were then prepared for lateral cephalograms: stainless steel spherical bearings were adhered with surgical adhesive tape to the superior, middle, and inferior points of the tragus of the ear and to the inferior border of the ala of the nose, and the subjects, wearing the study templates, were subjected to lateral cephalograms. Cephalometric tracings were done using AutoCAD 2010 software. Lines were drawn connecting the incisive papilla and the hamular notch, and connecting the ala of the nose to the bearings placed on the superior, middle, and inferior points of the tragus (Camper's lines S, M, and I, respectively). The angles between the three Camper's lines and the HIP plane were measured and recorded. The highest mean angulation was recorded for Camper's line S-HIP (8.03), followed by Camper's line M-HIP (4.60); Camper's line I-HIP recorded the least angulation (3.80). The HIP plane is parallel to Camper's plane, and the Camper's plane formed with the inferior point of the tragus as the posterior reference is the most nearly parallel to the HIP plane.

  5. POU4F3 mutation screening in Japanese hearing loss patients: Massively parallel DNA sequencing-based analysis identified novel variants associated with autosomal dominant hearing loss.

    Directory of Open Access Journals (Sweden)

    Tomohiro Kitano

    Full Text Available A variant in a transcription factor gene, POU4F3, is responsible for autosomal dominant nonsyndromic hereditary hearing loss, DFNA15. To date, 14 variants, including a whole deletion of POU4F3, have been reported to cause HL in various ethnic groups. In the present study, genetic screening for POU4F3 variants was carried out for a large series of Japanese hearing loss (HL) patients to clarify the prevalence and clinical characteristics of DFNA15 in the Japanese population. Massively parallel DNA sequencing of 68 target candidate genes was utilized in 2,549 unrelated Japanese HL patients (probands) to identify genomic variations responsible for HL. The detailed clinical features in patients with POU4F3 variants were collected from medical charts and analyzed. Twelve novel likely pathogenic POU4F3 variants (six missense variants, three frameshift variants, and three nonsense variants) were successfully identified in 15 probands (2.5%) among 602 families exhibiting autosomal dominant HL, whereas no variants were detected in the other 1,947 probands with autosomal recessive HL or HL with an unknown inheritance pattern. To obtain the audiovestibular configuration of the patients harboring POU4F3 variants, we collected audiograms and vestibular symptoms of the probands and their affected family members. Audiovestibular phenotypes in a total of 24 individuals from the 15 families possessing variants were characterized by progressive HL, with large variation in onset age and severity, with or without vestibular symptoms. Pure-tone audiograms indicated the most prevalent configuration to be mid-frequency HL followed by high-frequency HL, with asymmetry observed in approximately 20% of affected individuals. Analysis of the relationship between age and pure-tone average suggested that individuals with truncating variants showed earlier onset and slower progression of HL than did those with non-truncating variants. The present study showed that variants

  6. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable in important applications in the field of aerospace (space docking tests, etc.). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor, taking into account the different tensile and compressive stiffness values of the branches. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. A prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under the different tensile and compressive stiffness values of the branches. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)

  7. A Bayesian Nonparametric Approach to Factor Analysis

    DEFF Research Database (Denmark)

    Piatek, Rémi; Papaspiliopoulos, Omiros

    2018-01-01

    This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...

  8. Classification analysis of organization factors related to system safety

    International Nuclear Information System (INIS)

    Liu Huizhen; Zhang Li; Zhang Yuling; Guan Shihua

    2009-01-01

    This paper analyzes the different types of organization factors which influence system safety. Organization factors can be divided into interior and exterior factors. The latter include political, economic, technical, legal, socio-cultural, and geographical factors, as well as the relationships among different interest groups. The former include organization culture, communication, decision-making, training, process, supervision and management, and organization structure. This paper focuses on the description of these organization factors; their classification analysis is the preparatory work for quantitative analysis. (authors)

  9. Using BMDP and SPSS for a Q factor analysis.

    Science.gov (United States)

    Tanner, B A; Koning, S M

    1980-12-01

    While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
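The Q-technique described here, factoring persons rather than variables with Euclidean distances as the association measure, can also be sketched outside BMDP and SPSS. The following Python/NumPy fragment is one illustrative construction (the distance-to-similarity transform is an arbitrary choice, not the packages' method):

```python
import numpy as np

def q_factor_loadings(data, n_factors=2):
    """Q-mode factoring sketch: build a person-by-person association
    matrix from Euclidean distances and eigendecompose it, so the
    'loadings' describe persons (types), not variables."""
    # pairwise Euclidean distances between rows (persons)
    diff = data[:, None, :] - data[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    # one simple distance-to-similarity transform (illustrative choice)
    sim = 1.0 / (1.0 + dist)
    vals, vecs = np.linalg.eigh(sim)          # eigenvalues ascending
    order = np.argsort(vals)[::-1][:n_factors]
    # scale eigenvectors to loadings; clip guards tiny negative eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

Persons with similar profiles end up with similar loading patterns, which is the basis for grouping them into types.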

  10. EXPLORATORY FACTOR ANALYSIS (EFA) IN CONSUMER BEHAVIOR AND MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcos Pascual Soler

    2012-06-01

    Full Text Available Exploratory Factor Analysis (EFA) is one of the most widely used statistical procedures in social research. The main objective of this work is to describe the most common practices used by researchers in the consumer behavior and marketing area. Through a literature review methodology, the practices of EFA in five consumer behavior and marketing journals (2000-2010) were analyzed. Then, the choices made by the researchers concerning factor model, retention criteria, rotation, factor interpretation and other issues relevant to factor analysis were analyzed. The results suggest that researchers routinely conduct analyses using questionable methods. Suggestions for improving the use of factor analysis and the reporting of results are presented, and a checklist (Exploratory Factor Analysis Checklist, EFAC) is provided to help editors, reviewers, and authors improve the reporting of exploratory factor analysis.

  11. Factor analysis of serogroups botanica and aurisina of Leptospira biflexa.

    Science.gov (United States)

    Cinco, M

    1977-11-01

    Factor analysis is performed on serovars of the Botanica and Aurisina serogroups of Leptospira biflexa. The results show the arrangement of the main serovar- and serogroup-specific factors, as well as the antigens shared with serovars of heterologous serogroups.

  12. Human factors analysis of incident/accident report

    International Nuclear Information System (INIS)

    Kuroda, Isao

    1992-01-01

    Human factors analysis of accidents/incidents faces difficulties that are not only technical but also psychosocial in background. This report introduces some experiments with the 'variation diagram method', which can be extended to operational and managerial factors. (author)

  13. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.

  14. Analysis of success factors in advertising

    OpenAIRE

    Fedorchak, Oleksiy; Kedebecz, Kristina

    2017-01-01

    The essence of the success factors of advertising campaigns is investigated. The stages of conducting advertising campaigns and of evaluating their effectiveness are determined, and the goals and objectives of advertising campaigns are defined.

  15. Holographic analysis of diffraction structure factors

    International Nuclear Information System (INIS)

    Marchesini, S.; Bucher, J.J.; Shuh, D.K.; Fabris, L.; Press, M.J.; West, M.W.; Hussain, Z.; Mannella, N.; Fadley, C.S.; Van Hove, M.A.; Stolte, W.C.

    2002-01-01

    We combine the theory of inside-source/inside-detector x-ray fluorescence holography with that of Kossel lines/x-ray standing waves in the kinematic approximation to directly obtain the phases of the diffraction structure factors. The influence of Kossel lines and standing waves on holography is also discussed. We obtain a partial phase determination from experimental data, determining the sign of the real part of the structure factor for several reciprocal lattice vectors of a vanadium crystal

  16. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
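The rendezvous bottleneck described above can be captured in a toy cost model (illustrative constants, not measurements from the paper): per-cycle tracking work shrinks as 1/N while the cost of gathering the fission bank grows with N, so the wall time per cycle has a minimum and then rises, matching the reported loss of speedup.

```python
def cycle_time(n_procs, histories=1_000_000, t_history=1e-6, t_sync=1e-3):
    """Toy wall-time model for one criticality cycle.
    histories / n_procs * t_history : particle tracking, perfectly divided
    t_sync * n_procs                : rendezvous cost of collecting the
                                      fission bank from every processor
    (all constants here are illustrative assumptions)."""
    return histories / n_procs * t_history + t_sync * n_procs
```

With these constants the minimum falls near N = sqrt(histories * t_history / t_sync) ≈ 32 processors; beyond that, adding processors makes each cycle slower rather than faster.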

  17. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often described as embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel on tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation, used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given for getting the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  18. Isotropic damage model and serial/parallel mix theory applied to nonlinear analysis of ferrocement thin walls. Experimental and numerical analysis

    Directory of Open Access Journals (Sweden)

    Jairo A. Paredes

    2016-01-01

    Full Text Available Ferrocement thin walls are the structural elements that comprise the earthquake resistant system of dwellings built with this material. This article presents the results drawn from an experimental campaign carried out over full-scale precast ferrocement thin walls that were assessed under lateral static loading conditions. The tests allowed the identification of structural parameters and the evaluation of the performance of the walls under static loading conditions. Additionally, an isotropic damage model for modelling the mortar was applied, as well as the classic elasto-plastic theory for modelling the meshes and reinforcing bars. The ferrocement is considered as a composite material, thus the serial/parallel mix theory is used for modelling its mechanical behavior. In this work a methodology for the numerical analysis that allows modeling the nonlinear behavior exhibited by ferrocement walls under static loading conditions, as well as their potential use in earthquake resistant design, is proposed.

  19. Identification of noise in linear data sets by factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, Ph.K.

    1982-01-01

    A technique that can identify bad data points after the data have been generated is classical factor analysis. Its ability to identify two different types of data error makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it in order to confidently isolate errors. (author)

  20. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ an...

  1. Analysis of Different Series-Parallel Connection Modules for Dye-Sensitized Solar Cell by Electrochemical Impedance Spectroscopy

    Directory of Open Access Journals (Sweden)

    Jung-Chuan Chou

    2016-01-01

    Full Text Available The internal impedances of different dye-sensitized solar cell (DSSC) models were analyzed by electrochemical impedance spectroscopy (EIS) with an equivalent circuit model. The Nyquist plot was built to simulate the redox reaction of the internal device at the heterojunction. This is useful for analyzing the component structure and promoting the photovoltaic conversion efficiency of the DSSC. The impedance of the DSSC was investigated and an externally connected module assembly was constructed from single cells on the scaled-up module. According to the experimental results, the impedance increased as more cells were connected in series; on the contrary, it decreased as more cells were connected in parallel.
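
    The series/parallel trend reported above follows directly from circuit theory. A minimal sketch (cell impedance value invented, not measured EIS data):

```python
# Equivalent impedance of n identical cells: series connections add
# impedances, parallel connections divide them, matching the reported
# trend (impedance rises in series, falls in parallel).
def series_impedance(z_cell, n):
    # impedances in series add
    return n * z_cell

def parallel_impedance(z_cell, n):
    # n identical impedances in parallel: Z_eq = Z / n
    return z_cell / n

z_cell = complex(20.0, -7.5)   # assumed per-cell impedance in ohms (R - jX)
z_series3 = series_impedance(z_cell, 3)
z_par3 = parallel_impedance(z_cell, 3)
```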

  2. Efficiency Analysis of the access method with the cascading Bloom filter to the data warehouse on the parallel computing platform

    Science.gov (United States)

    Grigoriev, Yu A.; Proletarskaya, V. A.; Ermakov, E. Yu; Ermakov, O. Yu

    2017-10-01

    A new method using a cascading Bloom filter (CBF) was developed for executing SQL queries in the Apache Spark parallel computing environment. It includes the representation of the original query as several subqueries, the development of a connection graph and the transformation of subqueries, the identification of the joins where Bloom filters should be used, and the representation of the graph in terms of Spark. Full-scale experiments on query Q3 of the TPC-H benchmark confirmed the effectiveness of the developed method.
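
    The core idea of a Bloom-filter join pre-filter can be sketched as follows (an illustrative stdlib implementation, not the paper's CBF or its Spark integration): the build side's join keys are hashed into a bit array, and the probe side discards rows whose keys cannot be present, shrinking the data shuffled for the join.

```python
import hashlib

# Illustrative Bloom filter: k hash positions per key in an m-bit array.
# Membership tests may give false positives but never false negatives.
class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0
    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos
    def might_contain(self, key):
        return all((self.bits >> pos) & 1 for pos in self._positions(key))

bf = BloomFilter()
for key in ["order-1", "order-7"]:   # build side of the join
    bf.add(key)
```

A cascade, as in the paper's CBF, chains several such filters across successive joins so each stage further prunes the probe side.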

  3. Small-Signal Modeling, Analysis and Testing of Parallel Three-Phase-Inverters with A Novel Autonomous Current Sharing Controller

    DEFF Research Database (Denmark)

    Guan, Yajuan; Quintero, Juan Carlos Vasquez; Guerrero, Josep M.

    2015-01-01

    A novel, simple and effective autonomous current-sharing controller for parallel three-phase inverters is employed in this paper. The novel controller endows the system with fast response and high precision in contrast to conventional droop control, as it does not require calculating any...... active or reactive power; instead it uses a virtual impedance loop and an SRF phase-locked loop. The small-signal model of the system was developed for the autonomous operation of an inverter-based microgrid with the proposed controller. The developed model shows a large stability margin and fast transient...

  4. Design Analysis and Dynamic Modeling of a High-Speed 3T1R Pick-and-Place Parallel Robot

    DEFF Research Database (Denmark)

    Wu, Guanglei; Bai, Shaoping; Hjørnet, Preben

    2015-01-01

    This paper introduces a four-degree-of-freedom parallel robot producing three translations and one rotation (Schönflies motion). This robot can generate a rectangular workspace that is close to the applicable work envelope and suitable for pick-and-place operations. The kinematics of the robot...... is studied to analyze the workspace, and the isocontours of the local dexterity over the representative regular workspace are visualized. The simplified dynamics is modeled and compared with an Adams model to show its effectiveness....

  5. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  6. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while...... in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance...

  7. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of new results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations of group transforms for dihedral and symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.
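
    The storage saving behind group equivariance can be seen in miniature for the cyclic group C_n (our toy example, not the thesis's dihedral chess code): a C_n-equivariant matrix is circulant, so a single stored row replaces the full matrix, and the operator commutes with cyclic shifts.

```python
# Toy illustration of group equivariance for the cyclic group C_n:
# the implicit matrix entry is C[i][j] = first_row[(j - i) % n], so one
# row replaces the n-by-n matrix and C commutes with cyclic shifts.
def circulant_matvec(first_row, x):
    n = len(first_row)
    return [sum(first_row[(j - i) % n] * x[j] for j in range(n))
            for i in range(n)]

def cyclic_shift(v):
    return [v[-1]] + v[:-1]

c = [2.0, -1.0, 0.0, 1.0]   # stored first row of the equivariant matrix
x = [1.0, 2.0, 3.0, 4.0]
y = circulant_matvec(c, x)
# Equivariance: shifting the input shifts the output the same way.
```

In the thesis, Fourier transforms over the group diagonalize such matrices for speed; here the point is only the symmetry structure.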

  8. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  9. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  10. Analysis of Increased Information Technology Outsourcing Factors

    Directory of Open Access Journals (Sweden)

    Brcar Franc

    2013-01-01

    Full Text Available The study explores the field of IT outsourcing. The narrow field of research is to build a model of IT outsourcing based on influential factors. The purpose of this research is to determine the factors influencing the expansion of IT outsourcing. A survey was conducted among 141 large-sized Slovenian companies. Data were statistically analyzed using binary logistic regression. The final model contains five factors: (1) management's support; (2) knowledge of IT outsourcing; (3) improvement of efficiency and effectiveness; (4) quality improvement of IT services; and (5) innovation improvement of IT. Managers can immediately use the results of this research in their decision-making. Increased performance of each individual organization is to the benefit of the entire society. The examination of IT outsourcing with the methods used is the first such research in Slovenia.

  11. Warranty claim analysis considering human factors

    International Nuclear Information System (INIS)

    Wu Shaomin

    2011-01-01

    Warranty claims are not always due to product failures. They can also be caused by two types of human factors. On the one hand, consumers might claim warranty due to misuse and/or failures caused by various human factors. Such claims might account for more than 10% of all reported claims. On the other hand, consumers might not be bothered to claim warranty for failed items that are still under warranty, or they may claim warranty after they have experienced several intermittent failures. These two types of human factors can affect warranty claim costs. However, research in this area has received rather little attention. In this paper, we propose three models to estimate the expected warranty cost when the two types of human factors are included. We consider two types of failures: intermittent and fatal failures, which might result in different claim patterns. Consumers might report claims after a fatal failure has occurred, and upon intermittent failures they might report claims after a number of failures have occurred. Numerical examples are given to validate the results derived.
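
    The two human-factor effects can be folded into a back-of-the-envelope expected-cost calculation (all numbers invented for illustration; the paper's actual models further distinguish intermittent from fatal failures and delayed claim patterns):

```python
# Toy expected warranty cost with two human factors: a fraction of
# claims stems from misuse rather than product failure, and not every
# genuine failure is actually claimed.
def expected_warranty_cost(units, p_failure, p_claim_given_failure,
                           p_misuse_claim, cost_per_claim):
    genuine = units * p_failure * p_claim_given_failure   # claimed failures
    misuse = units * p_misuse_claim                       # misuse-driven claims
    return (genuine + misuse) * cost_per_claim

cost = expected_warranty_cost(units=10_000, p_failure=0.05,
                              p_claim_given_failure=0.8,
                              p_misuse_claim=0.01, cost_per_claim=50.0)
```

Note how ignoring either factor biases the estimate in opposite directions: counting misuse claims inflates apparent failure rates, while unclaimed failures deflate them.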

  12. Chiral analysis of baryon form factors

    Energy Technology Data Exchange (ETDEWEB)

    Gail, T.A.

    2007-11-08

    This work presents an extensive theoretical investigation of the structure of the nucleon within the standard model of elementary particle physics. In particular, the long range contributions to a number of form factors parametrizing the interactions of the nucleon with an electromagnetic probe are calculated. The theoretical framework for those calculations is chiral perturbation theory, the exact low energy limit of Quantum Chromodynamics, which describes such long range contributions in terms of a pion cloud. In this theory, a nonrelativistic leading one-loop-order calculation of the form factors parametrizing the vector transition of a nucleon to its lowest lying resonance, the {delta}, a covariant calculation of the isovector and isoscalar vector form factors of the nucleon at next-to-leading one-loop order, and a covariant calculation of the isoscalar and isovector generalized vector form factors of the nucleon at leading one-loop order are performed. In order to perform consistent loop calculations in the covariant formulation of chiral perturbation theory, an appropriate renormalization scheme is defined in this work. All theoretical predictions are compared to phenomenology and to results from lattice QCD simulations. These comparisons allow for a determination of the low energy constants of the theory. Furthermore, the possibility of chiral extrapolation, i.e. the extrapolation of lattice data from simulations at large pion masses down to the small physical pion mass, is studied in detail. Statistical as well as systematic uncertainties are estimated for all results throughout this work. (orig.)

  13. Regression analysis of nuclear plant capacity factors

    International Nuclear Information System (INIS)

    Stocks, K.J.; Faulkner, J.I.

    1980-07-01

    Operating data on all commercial nuclear power plants of the PWR, HWR, BWR and GCR types in the Western World are analysed statistically to determine whether the explanatory variables size, year of operation, vintage and reactor supplier are significant in accounting for the variation in capacity factor. The results are compared with a number of previous studies which analysed only United States reactors. The possibility of specification errors affecting the results is also examined. Although, in general, the variables considered are statistically significant, they explain only a small portion of the variation in the capacity factor. The equations thus obtained should certainly not be used to predict the lifetime performance of future large reactors
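
    The kind of regression the study applies can be sketched with a stdlib-only ordinary least squares fit (data values invented for illustration): capacity factor regressed on one explanatory variable, unit size, together with the R^2 statistic that the study found to be small.

```python
# Simple OLS of y on x, returning slope, intercept, and R^2
# (the share of capacity-factor variation the variable explains).
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

sizes = [500, 600, 800, 900, 1100, 1200]   # invented unit sizes (MWe)
cfs = [72, 68, 70, 61, 64, 58]             # invented capacity factors (%)
slope, intercept, r2 = ols(sizes, cfs)
```

An R^2 well below 1, as here, is exactly the situation the abstract warns about: a statistically significant variable that still leaves most of the variation unexplained.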

  14. An Empirical Analysis of Job Satisfaction Factors.

    Science.gov (United States)

    1987-09-01

    have acknowledged the importance of factors which make the Air Force attractive to its members or, conversely, make other employees consider...Maslow's need hierarchy theory attempts to show that man has five basic categories of needs: physiological, safety, belongingness, esteem, and self...attained until lower-level basic needs are attained. This implies a sort of growth process where optimal job environments for given employees are

  15. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2001-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  16. Parallel, Rapid Diffuse Optical Tomography of Breast

    National Research Council Canada - National Science Library

    Yodh, Arjun

    2002-01-01

    During the last year we have experimentally and computationally investigated rapid acquisition and analysis of informationally dense diffuse optical data sets in the parallel plate compressed breast geometry...

  17. Compiling Scientific Programs for Scalable Parallel Systems

    National Research Council Canada - National Science Library

    Kennedy, Ken

    2001-01-01

    ...). The research performed in this project included new techniques for recognizing implicit parallelism in sequential programs, a powerful and precise set-based framework for analysis and transformation...

  18. Robustness analysis of a parallel two-box digital polynomial predistorter for an SOA-based CO-OFDM system

    Science.gov (United States)

    Diouf, C.; Younes, M.; Noaja, A.; Azou, S.; Telescu, M.; Morel, P.; Tanguy, N.

    2017-11-01

    The linearization performance of various digital baseband pre-distortion schemes is evaluated in this paper for a coherent optical OFDM (CO-OFDM) transmitter employing a semiconductor optical amplifier (SOA). In particular, the benefits of using a parallel two-box (PTB) behavioral model, combining a static nonlinear function with a memory polynomial (MP) model, is investigated for mitigating the system nonlinearities and compared to the memoryless and MP models. Moreover, the robustness of the predistorters under different operating conditions and system uncertainties is assessed based on a precise SOA physical model. The PTB scheme proves to be the most effective linearization technique for the considered setup, with an excellent performance-complexity tradeoff over a wide range of conditions.

  19. A Factor Analysis of the BSRI and the PAQ.

    Science.gov (United States)

    Edwards, Teresa A.; And Others

    Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…

  20. Analysis and optimization of the TWINKLE factoring device

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Preneel, B.

    2000-01-01

    We describe an enhanced version of the TWINKLE factoring device and analyse to what extent it can be expected to speed up the sieving step of the Quadratic Sieve and Number Field Sieve factoring algorithms. The bottom line of our analysis is that the TWINKLE-assisted factorization of 768-bit
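
    The sieving step that TWINKLE accelerates in hardware can be illustrated with a naive software sketch (toy-sized parameters of our choosing): find values x near sqrt(n) for which x^2 - n factors completely over a small prime base, yielding the smooth relations the factoring algorithms combine.

```python
# Naive sketch of Quadratic Sieve-style sieving: collect x where
# x*x - n is smooth over the factor base (TWINKLE does this search
# optically/in hardware; real sieves use logarithmic approximations).
def is_smooth(value, base):
    for p in base:
        while value % p == 0:
            value //= p
    return value == 1

def sieve_smooth(n, base, span=100):
    root = int(n ** 0.5) + 1
    return [x for x in range(root, root + span) if is_smooth(x * x - n, base)]

# n = 10403 = 101 * 103, a toy semiprime
relations = sieve_smooth(10403, base=[2, 3, 5, 7, 11, 13], span=60)
```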

  1. Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.

    Science.gov (United States)

    Stankov, L

    1979-07-01

    The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

  2. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  3. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  4. Modification and analysis of engineering hot spot factor of HFETR

    International Nuclear Information System (INIS)

    Hu Yuechun; Deng Caiyu; Li Haitao; Xu Taozhong; Mo Zhengyu

    2014-01-01

    This paper presents the modification and analysis of the engineering hot spot factors of HFETR. The new factors are applied in the fuel temperature analysis and in estimating the safety allowable operating power of HFETR. The results show that the maximum cladding temperature of the fuel is lower when the new factors are used, and the safety allowable operating power of HFETR is higher, thus improving the economic efficiency of HFETR. (authors)

  5. A replication of a factor analysis of motivations for trapping

    Science.gov (United States)

    Schroeder, Susan; Fulton, David C.

    2015-01-01

    Using a 2013 sample of Minnesota trappers, we employed confirmatory factor analysis to replicate an exploratory factor analysis of trapping motivations conducted by Daigle, Muth, Zwick, and Glass (1998). We employed the same 25 items used by Daigle et al. and tested the same five-factor structure using a recent sample of Minnesota trappers. We also compared motivations in our sample to those reported by Daigle et al.

  6. Factor analysis improves the selection of prescribing indicators

    DEFF Research Database (Denmark)

    Rasmussen, Hanne Marie Skyggedal; Søndergaard, Jens; Sokolowski, Ineta

    2006-01-01

    OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was us...... appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns....

  7. A model for optimizing file access patterns using spatio-temporal parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Boonthanome, Nouanesengsy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Geveci, Berk [Kitware Inc., Clifton Park, NY (United States); Ahrens, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bauer, Andy [Kitware Inc., Clifton Park, NY (United States); Chaudhary, Aashish [Kitware Inc., Clifton Park, NY (United States); Miller, Ross G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
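
    A simplified estimator in the spirit of the paper's model (all constants invented, and real parallel filesystems are far more complex): read time depends on how many reads the access pattern issues and how large each is, so the same bytes read as a few contiguous chunks cost far less than as many scattered reads.

```python
# Toy read-time model: each read pays a fixed seek/latency cost plus
# transfer time; concurrency helps only up to an assumed filesystem
# limit on parallel streams.
def read_time(num_reads, bytes_per_read, procs,
              seek_s=0.005, bandwidth_Bps=500e6, max_streams=16):
    streams = min(procs, max_streams)        # assumed filesystem limit
    per_proc_reads = num_reads / procs
    per_read = seek_s + bytes_per_read / bandwidth_Bps
    return per_proc_reads * per_read * (procs / streams)

total = 1 << 30   # 1 GiB dataset
scattered = read_time(num_reads=4096, bytes_per_read=total // 4096, procs=16)
contiguous = read_time(num_reads=16, bytes_per_read=total // 16, procs=16)
```

Designing the spatio-temporal decomposition so that each process reads few, large, contiguous extents is what produced the paper's large speedups over traditional access patterns.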

  8. Human factor analysis and preventive countermeasures in nuclear power plant

    International Nuclear Information System (INIS)

    Li Ye

    2010-01-01

    Based on human error analysis theory and the characteristics of maintenance in a nuclear power plant, the human factors of maintenance in NPPs are divided into three areas: human, technology, and organization. Individual factors include psychological factors, physiological characteristics, health status, level of knowledge and interpersonal skills; technical factors include technology, equipment, tools, working order, etc.; organizational factors include management, information exchange, education, working environment, team building, leadership, etc. The analysis found that organizational factors can directly or indirectly affect the behavior of staff as well as the technical factors, and are the most basic human error factors. On this basis, measures for the nuclear power plant to reduce human error are proposed. (authors)

  9. ANALYSIS OF RISK FACTORS ECTOPIC PREGNANCY

    Directory of Open Access Journals (Sweden)

    Budi Santoso

    2017-04-01

    Full Text Available Introduction: Ectopic pregnancy is a pregnancy with extrauterine implantation. This situation is a gynecologic emergency that contributes to maternal mortality. Therefore, early recognition, based on identification of the risk factors for ectopic pregnancy, is needed. Methods: The design was descriptive observational. The samples were pregnant women who had ectopic pregnancy at the Maternity Room, Emergency Unit, Dr. Soetomo Hospital, Surabaya, from 1 July 2008 to 1 July 2010. The sampling technique was total sampling using medical records. Result: Patients with ectopic pregnancy numbered 99 out of 2090 pregnant women who sought treatment in Dr. Soetomo Hospital. However, only 29 patients had traceable risk factors. Discussion: Most ectopic pregnancies were in the age group of 26-30 years, comprising 32 patients (32.32%), then in the age group of 31-35 years with 25 patients (25.25%), 18 patients in the age group 21-25 years (18.18%), 17 patients in the age group 36-40 years (17.17%), 4 patients in the age group of 41 years and more (4.04%), and the fewest in the age group of 16-20 years with 3 patients (3.03%). A total of 12 patients with ectopic pregnancy (41.38%) had a history of abortion, and 6 patients (20.69%) each were in the group of ectopic pregnancy patients who used family planning and in the group with a history of surgery. There were 2 patients (6.90%) in the group of ectopic pregnancy patients who had both a history of surgery and a history of abortion. The incidence rate of ectopic pregnancy was 4.73%, mostly in the second gravidity (34.34%), whereas the nulliparous had the highest prevalence of 39.39%. Acquired risk factors were: history of operations 10.34%, family planning 20.69%, history of abortion 41.38%, history of abortion and operation 6.90%, and family planning with history of abortion 20.69%.

  10. Investigating product development strategy in beverage industry using factor analysis

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. Therefore, it is important to extract the effective variables associated with product development to improve the performance measurement of firms. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model investigates 36 factors and, using factor analysis, we extract the six most influential: information sharing, intelligence information, exposure strategy, differentiation, research and development strategy and market survey. The first strategy, partnership, includes sub-factors including product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, a customer-oriented approach to innovation and transmission of product development change, where inter-agency coordination is considered the most important factor. Internal strengths are the most influential factor impacting the second strategy, intelligence information. The third factor, the introduction strategy, includes four sub-criteria, of which consumer buying behavior is the most influential. Differentiation is the next important factor, with five components, of which knowledge and expertise in product innovation is the most important. Research and development strategy has four sub-criteria, where reducing the product development cycle is the most influential factor, and finally, market survey strategy is the last important factor, with three components, among which finding new markets plays the most important role.

  11. Parallelization of quantum molecular dynamics simulation code

    International Nuclear Information System (INIS)

    Kato, Kaori; Kunugi, Tomoaki; Shibahara, Masahiko; Kotake, Susumu

    1998-02-01

    A quantum molecular dynamics simulation code has been developed at the Kansai Research Establishment for analyzing the thermalization of photon energies in molecules and materials. The code has been parallelized for both a scalar massively parallel computer (Intel Paragon XP/S75) and a vector parallel computer (Fujitsu VPP300/12). Scalable speed-up was obtained on both machines by distributing the workload across processor units through division of the particle groups. By distributing work to processor units not only by particle group but also by the fine-grained per-particle calculations, high parallel performance was achieved on the Intel Paragon XP/S75. (author)
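The paper's code is not public; the particle-group decomposition it describes can be sketched in miniature: split the particle set into groups, let each "processor" (here a worker thread standing in for a Paragon/VPP processor unit) compute a partial result, then reduce. The velocities and worker count are made-up illustrations.

```python
# Toy sketch of the particle-group decomposition described above: the
# particle set is split into groups, each worker computes a partial
# result, and the pieces are summed. Illustrative only -- the actual
# code is a Fortran-era MD simulation, not Python.
from concurrent.futures import ThreadPoolExecutor

def kinetic_energy(group):
    """Partial kinetic energy of one particle group (unit mass)."""
    return sum(0.5 * (vx * vx + vy * vy) for vx, vy in group)

velocities = [(0.1 * i, 0.05 * i) for i in range(1000)]  # fake particle velocities
n_workers = 4
groups = [velocities[i::n_workers] for i in range(n_workers)]  # round-robin split

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    total = sum(pool.map(kinetic_energy, groups))

serial = kinetic_energy(velocities)  # reference serial result
print(abs(total - serial) <= 1e-6 * serial)  # → True
```

The per-particle (fine-grained) distribution the abstract mentions corresponds to splitting the inner loop of each group as well.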

  12. TIA: algorithms for development of identity-linked SNP islands for analysis by massively parallel DNA sequencing.

    Science.gov (United States)

    Farris, M Heath; Scott, Andrew R; Texter, Pamela A; Bartlett, Marta; Coleman, Patricia; Masters, David

    2018-04-11

    Single nucleotide polymorphisms (SNPs) located within the human genome have been shown to have utility as markers of identity in the differentiation of DNA from individual contributors. Massively parallel DNA sequencing (MPS) technologies and human genome SNP databases allow for the design of suites of identity-linked target regions, amenable to sequencing in a multiplexed and massively parallel manner. Therefore, tools are needed for leveraging the genotypic information found within SNP databases for the discovery of genomic targets that can be evaluated on MPS platforms. The SNP island target identification algorithm (TIA) was developed as a user-tunable system to leverage SNP information within databases. Using data within the 1000 Genomes Project SNP database, human genome regions were identified that contain globally ubiquitous identity-linked SNPs and that were responsive to targeted resequencing on MPS platforms. Algorithmic filters were used to exclude target regions that did not conform to user-tunable SNP island target characteristics. To validate the accuracy of TIA for discovering these identity-linked SNP islands within the human genome, SNP island target regions were amplified from 70 contributor genomic DNA samples using the polymerase chain reaction. Multiplexed amplicons were sequenced using the Illumina MiSeq platform, and the resulting sequences were analyzed for SNP variations. 166 putative identity-linked SNPs were targeted in the identified genomic regions. Of the 309 SNPs that provided discerning power across individual SNP profiles, 74 previously undefined SNPs were identified during evaluation of targets from individual genomes. Overall, DNA samples of 70 individuals were uniquely identified using a subset of the suite of identity-linked SNP islands. 
TIA offers a tunable genome search tool for the discovery of targeted genomic regions that are scalable in the population frequency and numbers of SNPs contained within the SNP island regions.

  13. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Bork, Lasse

    2017-01-01

    We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future movements in housing prices. We find that (S)PLS models systematically dominate PCA models. (S)PLS models also generate significant out-of-sample predictive power over and above the predictive power contained by the price-rent ratio, autoregressive benchmarks, and regression models based on small datasets.
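The 128-series macro panel is not reproduced here, but the PLS mechanics the paper relies on can be sketched with a single component: weight the predictors by their covariance with the target, form a score, and regress the target on that score. The synthetic rank-one panel below is an assumption chosen so one component fits exactly.

```python
# One-component PLS regression sketch (NIPALS-style weight step),
# illustrating the technique the paper compares against PCA. The data
# are synthetic, not the 128-series macro panel used in the study.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pls1_one_component(X, y):
    """Fit y ~ first PLS component of X; rows of X are observations."""
    p = len(X[0])
    # Weight vector: covariance direction between predictors and target.
    w = [dot([row[j] for row in X], y) for j in range(p)]
    norm = math.sqrt(dot(w, w))
    w = [wj / norm for wj in w]
    scores = [dot(row, w) for row in X]           # component scores t = X w
    b = dot(scores, y) / dot(scores, scores)      # regression of y on t
    return [b * t for t in scores]                # fitted values

# Synthetic rank-one panel: every predictor is a multiple of one latent
# driver, and the target equals that driver, so one component suffices.
latent = [1.0, -2.0, 0.5, 3.0, -1.5]
X = [[t * a for a in (0.5, 1.0, 2.0)] for t in latent]
y = latent[:]

fitted = pls1_one_component(X, y)
print(max(abs(f - yi) for f, yi in zip(fitted, y)) < 1e-9)  # → True
```

Unlike PCA, which picks directions of maximal predictor variance, the weight step here targets covariance with `y` — the property behind the paper's finding that (S)PLS dominates PCA.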

  14. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race
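The blurb breaks off at "barriers and race [conditions]"; a minimal stdlib sketch can show the barrier primitive those chapters cover — no thread enters phase two until every thread has finished phase one. The two-phase workload here is an invented example, not from the book.

```python
# Minimal sketch of a barrier: every worker must reach the barrier
# before any proceeds to phase two, preventing a race between phases.
# Purely illustrative two-phase workload.
import threading

N = 4
barrier = threading.Barrier(N)
phase_one = []
phase_two_ok = []

def worker(i):
    phase_one.append(i)       # phase 1 work (append is atomic under the GIL)
    barrier.wait()            # no thread passes until all N have arrived
    # By here, every thread has finished phase 1.
    phase_two_ok.append(len(phase_one) == N)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(phase_two_ok))  # → True
```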

  15. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  16. Physics based modeling of a series parallel battery pack for asymmetry analysis, predictive control and life extension

    Science.gov (United States)

    Ganesan, Nandhini; Basu, Suman; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Yeo, Taejung; Sohn, Dong Kee; Doo, Seokgwang

    2016-08-01

    Lithium-ion batteries used for electric vehicle applications are subject to large currents and varied operating conditions, making battery pack design and life extension a challenging problem. As complexity increases, modeling and simulation can lead to insights that ensure optimal performance and life extension. In this manuscript, an electrochemical-thermal (ECT) coupled model for a 6 series × 5 parallel pack is developed for Li-ion cells with NCA/C electrodes and validated against experimental data. The contribution of the cathode to overall degradation at various operating conditions is assessed. Pack asymmetry is analyzed from a design and an operational perspective. Design-based asymmetry leads to a new approach of obtaining the individual cell responses of the pack from an average ECT output. Operational asymmetry is demonstrated in terms of the effects of thermal gradients on cycle life, and an efficient model predictive control technique is developed. The concept of a reconfigurable battery pack is studied using detailed simulations that can be used for effective monitoring and extension of battery pack life.
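The ECT model itself is far more detailed, but a purely resistive sketch shows why operational asymmetry redistributes current within a parallel group: all cells share one terminal voltage, so a lower-resistance (e.g. hotter) cell carries more current. The resistances and pack current below are made-up numbers, not parameters from the paper.

```python
# Toy illustration of operational asymmetry in a parallel group: with a
# purely resistive cell model, cells share one terminal voltage, so the
# lower-resistance cell carries more current. Numbers are made up, not
# taken from the paper's ECT model.

def branch_currents(i_total, resistances):
    """Split pack current among parallel branches sharing one voltage."""
    conductances = [1.0 / r for r in resistances]
    v = i_total / sum(conductances)        # common terminal voltage drop
    return [v * g for g in conductances]   # Ohm's law per branch

# Five parallel cells; one cell has 20% lower resistance (thermal asymmetry).
resist = [0.010, 0.010, 0.010, 0.010, 0.008]
currents = branch_currents(50.0, resist)

print([round(i, 2) for i in currents])
```

The asymmetric cell ends up cycling harder than its neighbors, which is the mechanism behind the thermal-gradient effect on cycle life discussed above.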

  17. Factoring handedness data: I. Item analysis.

    Science.gov (United States)

    Messinger, H B; Messinger, M I

    1995-12-01

    Recently in this journal, Peters and Murphy challenged the validity of factor analyses done on bimodal handedness data, suggesting instead that right- and left-handers be studied separately. But bimodality may be avoidable if attention is paid to Oldfield's questionnaire format and instructions for the subjects. Two characteristics appear crucial: a two-column LEFT-RIGHT format for the body of the instrument, and what we call Oldfield's Admonition: not to indicate strong preference for a handedness item, such as writing, unless "... the preference is so strong that you would never try to use the other hand unless absolutely forced to...". Attaining unimodality of an item distribution would seem to overcome the objections of Peters and Murphy. In a 1984 survey in Boston, we used Oldfield's ten-item questionnaire exactly as published. This produced unimodal item distributions. With reflection of the five-point item scale and a logarithmic transformation, we achieved a degree of normalization for the items. Two surveys elsewhere, based on Oldfield's 20-item list but with changes in the questionnaire format and the instructions, yielded markedly different item distributions, with peaks at each extreme and sometimes in the middle as well.

  18. Fast parallel event reconstruction

    CERN Multimedia

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased by a factor of 120,000, to 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores, it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...
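The KF fit itself is not reproduced here; the packing idea it exploits — laying tracks out as parallel arrays (structure-of-arrays) so one operation updates several tracks in lockstep, the layout a SIMD unit needs — can be sketched with a toy prediction step. The straight-line track model and numbers are assumptions for illustration.

```python
# Sketch of the SIMD packing idea: store several tracks as parallel
# arrays (structure-of-arrays) so one elementwise operation updates all
# of them in lockstep, mirroring a 4-wide SIMD register. Toy straight-
# line prediction step only -- not the actual KF track fit.

def predict_packed(xs, slopes, dz):
    """Propagate packed track positions by dz in one elementwise pass."""
    return [x + s * dz for x, s in zip(xs, slopes)]

# Four tracks packed together, as in one SIMD register.
xs     = [1.0, 2.0, 3.0, 4.0]   # positions
slopes = [0.1, -0.2, 0.0, 0.5]  # dx/dz
new_xs = predict_packed(xs, slopes, dz=10.0)
print(new_xs)  # → [2.0, 0.0, 3.0, 9.0]
```

In the real implementation the same layout lets a compiler or intrinsics map the elementwise pass onto vector instructions instead of a Python loop.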

  19. Model-driven product line engineering for mapping parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir

    2016-01-01

    Mapping parallel algorithms to parallel computing platforms requires several activities such as the analysis of the parallel algorithm, the definition of the logical configuration of the platform, the mapping of the algorithm to the logical configuration platform and the implementation of the

  20. Thermal-structural Analysis and Fatigue Life Evaluation of a Parallel Slide Gate Valve in Accordance with ASME B&PVC

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Ho; Han, Jeong Sam [Andong Nat’l Univ., Andong (Korea, Republic of)]; Choi, Jae Seung [Key Valve Technologies Ltd., Siheung (Korea, Republic of)]

    2017-02-15

    A parallel slide gate valve (PSGV) is located between the heat recovery steam generator (HRSG) and the steam turbine in a combined cycle power plant (CCPP). It is used to control the flow of steam and undergoes repetitive operations such as startups, load changes, and shutdowns during its operation period. It is therefore necessary to evaluate the fatigue damage and the structural integrity under the large compressive thermal stress caused by the temperature difference through the valve wall thickness during startup operations. In this paper, the thermal-structural analysis and the fatigue life evaluation of a 16-inch PSGV installed on the HP steam line are performed according to the fatigue life assessment method described in ASME B&PVC VIII-2; the method uses the equivalent stress from the elastic stress analysis.
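The paper's evaluation uses finite-element analysis per the ASME rules; as a back-of-envelope check of the driving load, the textbook surface stress for a linear temperature gradient across a restrained wall is σ = EαΔT / (2(1 − ν)). The material values and temperature difference below are generic steel assumptions, not the paper's data.

```python
# Order-of-magnitude estimate of the through-wall thermal stress that
# drives the fatigue loading described above, for a linear temperature
# gradient across a restrained wall: sigma = E*alpha*dT / (2*(1 - nu)).
# Material values are generic steel numbers, not the paper's data.

def thermal_bending_stress(e_mod, alpha, d_temp, nu):
    """Peak surface stress for a linear through-wall gradient (Pa)."""
    return e_mod * alpha * d_temp / (2.0 * (1.0 - nu))

E = 200e9       # Young's modulus, Pa (generic steel)
ALPHA = 1.2e-5  # thermal expansion coefficient, 1/K
NU = 0.3        # Poisson's ratio
DT = 100.0      # through-wall temperature difference, K

sigma = thermal_bending_stress(E, ALPHA, DT, NU)
print(round(sigma / 1e6, 1), "MPa")  # → 171.4 MPa
```

A startup transient of this size thus produces stresses of order 100 MPa, which is why the code-based fatigue assessment in the paper is needed.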