WorldWideScience

Sample records for automated parallel cultures

  1. Fully automated parallel oligonucleotide synthesizer

    Czech Academy of Sciences Publication Activity Database

    Lebl, M.; Burger, Ch.; Ellman, B.; Heiner, D.; Ibrahim, G.; Jones, A.; Nibbe, M.; Thompson, J.; Mudra, Petr; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel

    2001-01-01

    Vol. 66, No. 8 (2001), pp. 1299-1314. ISSN 0010-0765. Institutional research plan: CEZ:AV0Z4055905. Keywords: automated oligonucleotide synthesizer. Subject RIV: CC - Organic Chemistry. Impact factor: 0.778, year: 2001

  2. Multiple microfermentor battery: a versatile tool for use with automated parallel cultures of microorganisms producing recombinant proteins and for optimization of cultivation protocols.

    Science.gov (United States)

    Frachon, Emmanuel; Bondet, Vincent; Munier-Lehmann, Hélène; Bellalou, Jacques

    2006-08-01

    A multiple microfermentor battery was designed for high-throughput recombinant protein production in Escherichia coli. This novel system comprises eight aerated glass reactors with a working volume of 80 ml and a moving external optical sensor for measuring optical densities at 600 nm (OD600) ranging from 0.05 to 100 online. Each reactor can be fitted with miniature probes to monitor temperature, dissolved oxygen (DO), and pH. Independent temperature regulation for each vessel is obtained with heating/cooling Peltier devices. Data from pH, DO, and turbidity sensors are collected on a FieldPoint (National Instruments) I/O interface and are processed and recorded by a LabVIEW program on a personal computer, which enables feedback control of the culture parameters. A high-density medium formulation was designed, which enabled us to grow E. coli to OD600 up to 100 in batch cultures with oxygen-enriched aeration. Accordingly, the biomass and the amount of recombinant protein produced in a 70-ml culture were at least equivalent to the biomass and the amount of recombinant protein obtained in a Fernbach flask with 1 liter of conventional medium. Thus, the microfermentor battery appears to be well suited for automated parallel cultures and process optimization, such as that needed for structural genomics projects.
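
    The abstract describes per-vessel feedback control of culture parameters (temperature, DO, pH) from sensor data processed in LabVIEW. Purely as an illustration of that control idea, and not the actual FieldPoint/LabVIEW implementation, the sketch below runs a proportional temperature controller over eight simulated vessels; the FakeVessel model, gain and timings are invented stand-ins.

```python
# Minimal sketch of per-vessel proportional temperature control (illustrative only).
SETPOINT_C = 37.0   # target culture temperature
GAIN = 5.0          # proportional gain, illustrative value

class FakeVessel:
    """Toy stand-in for one 80 ml reactor: first-order thermal response to Peltier power."""
    def __init__(self, temp_c=25.0):
        self.temp_c = temp_c
        self.power = 0.0                       # -1.0 (full cooling) .. +1.0 (full heating)

    def step(self, dt=1.0):
        ambient = 25.0
        self.temp_c += dt * (0.5 * self.power + 0.02 * (ambient - self.temp_c))

def control_step(vessel):
    """One proportional update, analogous to the per-vessel feedback loop."""
    error = SETPOINT_C - vessel.temp_c
    vessel.power = max(-1.0, min(1.0, GAIN * error))

if __name__ == "__main__":
    vessels = [FakeVessel() for _ in range(8)]  # eight reactors in the battery
    for _ in range(600):                        # ten simulated minutes at 1 s steps
        for v in vessels:
            control_step(v)
            v.step()
    print([round(v.temp_c, 2) for v in vessels])
```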

  3. Automated Parallel Capillary Electrophoretic System

    Science.gov (United States)

    Li, Qingbo; Kane, Thomas E.; Liu, Changsheng; Sonnenschein, Bernard; Sharer, Michael V.; Kernan, John R.

    2000-02-22

    An automated electrophoretic system is disclosed. The system employs a capillary cartridge having a plurality of capillary tubes. The cartridge has a first array of capillary ends projecting from one side of a plate. The first array of capillary ends is spaced apart in substantially the same manner as the wells of a microtitre tray of standard size. This allows one to simultaneously perform capillary electrophoresis on samples present in each of the wells of the tray. The system includes a stacked, dual-carousel arrangement to eliminate cross-contamination resulting from reuse of the same buffer tray on consecutive electrophoresis runs. The system also has a gel delivery module containing either a gel syringe driven by a stepper motor or a high-pressure chamber with a pump to quickly and uniformly deliver gel through the capillary tubes. The system further includes a multi-wavelength beam generator that produces a laser beam spanning a wide range of wavelengths. An off-line capillary reconditioner thoroughly cleans a capillary cartridge to enable simultaneous execution of electrophoresis with another capillary cartridge. The streamlined nature of the off-line capillary reconditioner offers the advantage of increased system throughput with a minimal increase in system cost.

  4. Automated parallel recordings of topologically identified single ion channels.

    Science.gov (United States)

    Kawano, Ryuji; Tsuji, Yutaro; Sato, Koji; Osaki, Toshihisa; Kamiya, Koki; Hirano, Minako; Ide, Toru; Miki, Norihisa; Takeuchi, Shoji

    2013-01-01

    Although ion channels are attractive targets for drug discovery, the systematic screening of ion channel-targeted drugs remains challenging. To facilitate automated single ion-channel recordings for the analysis of drug interactions with the intra- and extracellular domain, we have developed a parallel recording methodology using artificial cell membranes. The use of stable lipid bilayer formation in droplet chamber arrays facilitated automated, parallel, single-channel recording from reconstituted native and mutated ion channels. Using this system, several types of ion channels, including mutated forms, were characterised by determining the protein orientation. In addition, we provide evidence that both intra- and extracellular amyloid-beta fragments directly inhibit the channel open probability of the hBK channel. This automated methodology provides a high-throughput drug screening system for the targeting of ion channels and a data-intensive analysis technique for studying ion channel gating mechanisms.

  5. Parallel symbolic execution for automated real-world software testing

    OpenAIRE

    Bucur, Stefan; Ureche, Vlad; Zamfir, Cristian; Candea, George

    2011-01-01

    This paper introduces Cloud9, a platform for automated testing of real-world software. Our main contribution is the scalable parallelization of symbolic execution on clusters of commodity hardware, to help cope with path explosion. Cloud9 provides a systematic interface for writing "symbolic tests" that concisely specify entire families of inputs and behaviors to be tested, thus improving testing productivity. Cloud9 can handle not only single-threaded programs but also multi-threaded and dis...

  6. StrAuto: automation and parallelization of STRUCTURE analysis.

    Science.gov (United States)

    Chhatre, Vikram E; Emerson, Kevin J

    2017-03-24

    Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa, including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational overload of this analysis, it does not fully automate the use of replicate STRUCTURE analysis runs required for downstream inference of optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach in addition to implementing parallel computation - a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and is available to download from http://strauto.popgen.org.
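
    StrAuto's central step, distributing replicate STRUCTURE runs across a range of K values over several processors, can be pictured with a short sketch. This is not StrAuto's own code: it assumes a `structure` binary on the PATH plus existing mainparams, extraparams and input files, and the command-line flags follow common STRUCTURE usage but should be checked against your installation.

```python
# Illustrative sketch: farm replicate STRUCTURE runs over a pool of worker processes.
import itertools
import subprocess
from multiprocessing import Pool

K_RANGE = range(1, 11)   # candidate numbers of populations
REPLICATES = 5           # replicate runs per K, needed for the Evanno delta-K step

def run_structure(job):
    """Launch one STRUCTURE run; the flags are assumptions, verify against your install."""
    k, rep = job
    out = f"results_K{k}_rep{rep}"
    cmd = ["structure", "-K", str(k), "-m", "mainparams", "-e", "extraparams",
           "-i", "project_data", "-o", out]
    subprocess.run(cmd, check=True)
    return out

if __name__ == "__main__":
    jobs = list(itertools.product(K_RANGE, range(REPLICATES)))
    with Pool(processes=8) as pool:          # distribute the 50 runs over 8 cores
        outputs = pool.map(run_structure, jobs)
    print(f"finished {len(outputs)} STRUCTURE runs")
```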

  7. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Long, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office Ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high-quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time-consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
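
    The master-slave scheme sketched in the abstract (worker nodes score candidates, the master applies selection, recombination and mutation) can be written compactly. This is a generic illustration rather than the authors' circuit-design GA: the fitness function is a trivial bit-counting stand-in for the expensive circuit simulation that the slave nodes would actually run.

```python
# Generic master-slave GA sketch: fitness evaluations farmed out to worker processes.
import random
from multiprocessing import Pool

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 60, 40

def fitness(genome):
    # Stand-in objective (count of 1s); a circuit-design GA would simulate the
    # candidate circuit here, which is the time-consuming step worth parallelizing.
    return sum(genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

def select(pop, scores):
    # Tournament selection: probabilistically favour high-quality parents.
    i, j = random.randrange(POP_SIZE), random.randrange(POP_SIZE)
    return pop[i] if scores[i] >= scores[j] else pop[j]

if __name__ == "__main__":
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    with Pool() as workers:                      # the "slave" processes
        for _ in range(GENERATIONS):
            scores = workers.map(fitness, pop)   # parallel fitness evaluation
            pop = [mutate(crossover(select(pop, scores), select(pop, scores)))
                   for _ in range(POP_SIZE)]     # master applies the genetic operators
    print("best fitness:", max(map(fitness, pop)))
```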

  8. Effects of corporate culture on the implementation of automation in ...

    African Journals Online (AJOL)

    This study surveyed the corporate culture components of attitudes, beliefs and assumptions in relation to the implementation of automation in libraries of federal universities in the North East Zone of Nigeria. The objectives of the study were to determine: the level of implementation of automation of libraries in federal universities of the ...

  9. A hierarchical, automated target recognition algorithm for a parallel analog processor

    Science.gov (United States)

    Woodward, Gail; Padgett, Curtis

    1997-01-01

    A hierarchical approach is described for an automated target recognition (ATR) system, VIGILANTE, that uses a massively parallel, analog processor (3DANN). The 3DANN processor is capable of performing 64 concurrent inner products of size 1x4096 every 250 nanoseconds.

  10. The Protein Maker: an automated system for high-throughput parallel purification

    International Nuclear Information System (INIS)

    Smith, Eric R.; Begley, Darren W.; Anderson, Vanessa; Raymond, Amy C.; Haffner, Taryn E.; Robinson, John I.; Edwards, Thomas E.; Duncan, Natalie; Gerdts, Cory J.; Mixon, Mark B.; Nollert, Peter; Staker, Bart L.; Stewart, Lance J.

    2011-01-01

    The Protein Maker instrument addresses a critical bottleneck in structural genomics by allowing automated purification and buffer testing of multiple protein targets in parallel with a single instrument. Here, the use of this instrument to (i) purify multiple influenza-virus proteins in parallel for crystallization trials and (ii) identify optimal lysis-buffer conditions prior to large-scale protein purification is described. The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications

  11. Effects of corporate culture on the implementation of automation in ...

    African Journals Online (AJOL)

    This study determined the perception of library staff on whether corporate culture components (values, norms, attitudes and beliefs) are responsible for the non-implementation of automation in libraries of federal universities in the North East Zone of Nigeria. The objectives of the study are to determine: the ...

  12. Anthropology and cultural neuroscience: creating productive intersections in parallel fields.

    Science.gov (United States)

    Brown, R A; Seligman, R

    2009-01-01

    Partly due to the failure of anthropology to productively engage the fields of psychology and neuroscience, investigations in cultural neuroscience have occurred largely without the active involvement of anthropologists or anthropological theory. Dramatic advances in the tools and findings of social neuroscience have emerged in parallel with significant advances in anthropology that connect social and political-economic processes with fine-grained descriptions of individual experience and behavior. We describe four domains of inquiry that follow from these recent developments, and provide suggestions for intersections between anthropological tools - such as social theory, ethnography, and quantitative modeling of cultural models - and cultural neuroscience. These domains are: the sociocultural construction of emotion, status and dominance, the embodiment of social information, and the dual social and biological nature of ritual. Anthropology can help locate unique or interesting populations and phenomena for cultural neuroscience research. Anthropological tools can also help "drill down" to investigate key socialization processes accountable for cross-group differences. Furthermore, anthropological research points at meaningful underlying complexity in assumed relationships between social forces and biological outcomes. Finally, ethnographic knowledge of cultural content can aid with the development of ecologically relevant stimuli for use in experimental protocols.

  13. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less
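
    The Normalize-Transpose law mentioned above has a rough analogue outside SequenceL: a function written for scalars is applied element by element when handed sequences, and those independent element-wise applications are exactly what a compiler can schedule in parallel. The sketch below is an illustrative Python approximation of that idea, not SequenceL semantics.

```python
# Rough Python analogue of the Normalize-Transpose (NT) idea, for illustration only.
def nt_apply(fn, *args):
    """Apply a scalar function; if any argument is a list, normalize the scalar
    arguments to matching length and map over the transposed rows recursively."""
    if not any(isinstance(a, list) for a in args):
        return fn(*args)
    length = max(len(a) for a in args if isinstance(a, list))
    normalized = [a if isinstance(a, list) else [a] * length for a in args]
    return [nt_apply(fn, *row) for row in zip(*normalized)]

if __name__ == "__main__":
    add = lambda x, y: x + y                          # written purely for scalars
    print(nt_apply(add, [1, 2, 3], 10))               # -> [11, 12, 13]
    print(nt_apply(add, [[1, 2], [3, 4]], [10, 20]))  # -> [[11, 12], [23, 24]]
```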

  14. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    Science.gov (United States)

    Yu, Zeta Tak For; Guan, Huijiao; Cheung, Mei Ki; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-06-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL-1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications.

  15. Fast automated airborne electromagnetic data interpretation using parallelized particle swarm optimization

    Science.gov (United States)

    Desmarais, Jacques K.; Spiteri, Raymond J.

    2017-12-01

    A parallelized implementation of the particle swarm optimization algorithm is developed. We use the optimization procedure to speed up a previously published algorithm for airborne electromagnetic data interpretation. This algorithm is the only parametrized automated procedure for extracting the three-dimensionally varying geometrical parameters of conductors embedded in a resistive environment, such as igneous and metamorphic terranes. When compared to the original algorithm, the new optimization procedure is faster by two orders of magnitude (factor of 100). Synthetic model tests show that for the chosen system architecture and objective function, the particle swarm optimization approach depends very weakly on the rate of communication of the processors. Optimal wall-clock times are obtained using three processors. The increased performance means that the algorithm can now easily be used for fast routine interpretation of airborne electromagnetic surveys consisting of several anomalies, as is displayed by a test on MEGATEM field data collected at the Chibougamau site, Québec.
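
    As an illustration of the method (not the authors' implementation), the sketch below runs a basic particle swarm optimizer whose objective evaluations, the costly step in an electromagnetic inversion, are distributed over a process pool; the quadratic objective is a placeholder for the real data-misfit function, and all swarm parameters are generic defaults.

```python
# Basic PSO with objective evaluations distributed over a process pool (illustrative).
import numpy as np
from multiprocessing import Pool

DIM, N_PARTICLES, ITERATIONS = 4, 24, 100
W, C1, C2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients

def objective(x):
    # Placeholder misfit; the geophysical application would compare observed and
    # modelled electromagnetic responses for the conductor parameters in x.
    return float(np.sum(x ** 2))

def pso():
    rng = np.random.default_rng(0)
    pos = rng.uniform(-5, 5, (N_PARTICLES, DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.full(N_PARTICLES, np.inf)
    gbest, gbest_val = None, np.inf
    with Pool() as pool:
        for _ in range(ITERATIONS):
            vals = np.array(pool.map(objective, list(pos)))   # parallel evaluations
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            if vals.min() < gbest_val:
                gbest_val, gbest = vals.min(), pos[vals.argmin()].copy()
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
            pos = pos + vel
    return gbest, gbest_val

if __name__ == "__main__":
    best, misfit = pso()
    print("best parameters:", best.round(3), "misfit:", misfit)
```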

  16. CULTURE AND TECHNOLOGY: AUTOMATION IN THE CREATIVE PROCESSES OF NARRATIVE

    Directory of Open Access Journals (Sweden)

    Fernando Fogliano

    2013-12-01

    Full Text Available The objective here is to reflect on the problem raised by the progressively opaque presence of technology in contemporary artistic production. Automation is the most evident aspect of the technology of the devices used for production, post-production and dissemination of this cultural activity. Throughout the text the philosophers Vilém Flusser and Gilbert Simondon are brought into confrontation so that a more profound insight can be obtained. Language is considered here as the integrative factor in the search for a new convergent conceptual scenario that enables us to understand the consequences of the technological convergence.

  17. Electrical defibrillation optimization: An automated, iterative parallel finite-element approach

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, S.A.; Shadid, J.N. [Sandia National Lab., Albuquerque, NM (United States); Ng, K.T. [New Mexico State Univ., Las Cruces, NM (United States); Nadeem, A. [Univ. of Pittsburgh, PA (United States)

    1997-04-01

    To date, optimization of electrode systems for electrical defibrillation has been limited to hand-selected electrode configurations. In this paper we present an automated approach which combines detailed, three-dimensional (3-D) finite element torso models with optimization techniques to provide a flexible analysis and design tool for electrical defibrillation optimization. Specifically, a parallel direct search (PDS) optimization technique is used with a representative objective function to find an electrode configuration which corresponds to the satisfaction of a postulated defibrillation criterion with a minimum amount of power and a low possibility of myocardium damage. For adequate representation of the thoracic inhomogeneities, 3-D finite-element torso models are used in the objective function computations. The CPU-intensive finite-element calculations required for the objective function evaluation have been implemented on a message-passing parallel computer in order to complete the optimization calculations in a timely manner. To illustrate the optimization procedure, it has been applied to a representative electrode configuration for transmyocardial defibrillation, namely the subcutaneous patch-right ventricular catheter (SP-RVC) system. Sensitivity of the optimal solutions to various tissue conductivities has been studied. 39 refs., 9 figs., 2 tabs.

  18. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: fluorescence in situ hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
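
    The optimization at the heart of the abstract, rearranging cosmid-clone "columns" so that the 1s in each object "row" become contiguous, can be pictured with a toy, serial simulated-annealing run. This sketch omits the FISH ordering constraints and the 40-machine client/server parallelism of the real system; the matrix, cost function and cooling schedule are invented for the example.

```python
# Toy simulated annealing for the M x N column-ordering problem (illustrative only).
import math
import random

import numpy as np

def gap_count(matrix, order):
    """Total number of runs of 1s across rows; one run per row means no gaps."""
    total = 0
    for row in matrix[:, order]:
        total += np.count_nonzero(np.diff(np.concatenate(([0], row))) == 1)
    return total

def anneal(matrix, steps=20000, t0=2.0, cooling=0.9995):
    n = matrix.shape[1]
    order = list(range(n))
    cost, t = gap_count(matrix, order), t0
    for _ in range(steps):
        i, j = random.sample(range(n), 2)            # propose swapping two columns
        order[i], order[j] = order[j], order[i]
        new_cost = gap_count(matrix, order)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost                          # accept the move
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo the swap
        t *= cooling
    return order, cost

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    m = np.zeros((8, 30), dtype=int)
    for r in range(8):                               # each "object" spans an interval of clones
        start = rng.integers(0, 20)
        m[r, start:start + rng.integers(3, 10)] = 1
    best_order, gaps = anneal(m[:, rng.permutation(30)])   # scramble columns, then re-order
    print("runs of 1s after annealing (8 rows -> ideal is 8):", gaps)
```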

  19. The Influence of Cultural Factors on Trust in Automation

    Science.gov (United States)

    Chien, Shih-Yi James

    2016-01-01

    Human interaction with automation is a complex process that requires both skilled operators and complex system designs to effectively enhance overall performance. Although automation has successfully managed complex systems throughout the world for over half a century, inappropriate reliance on automation can still occur, such as the recent…

  20. Automated long-term monitoring of parallel microfluidic operations applying a machine vision-assisted positioning method.

    Science.gov (United States)

    Yip, Hon Ming; Li, John C S; Xie, Kai; Cui, Xin; Prasad, Agrim; Gao, Qiannan; Leung, Chi Chiu; Lam, Raymond H W

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities.
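
    A minimal version of the realignment idea, locating an alignment mark by template matching and turning the detected position into a stage correction, might look like the following. It uses OpenCV on a synthetic image; the device-specific image acquisition and motorized-stage control are not represented, and all coordinates are invented for the example.

```python
# Illustrative mark detection by template matching; stage control is not shown.
import cv2
import numpy as np

def find_offset(frame, template, expected_xy):
    """Return (dx, dy) between the detected mark and its preset (expected) position."""
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)        # (x, y) of the best-matching corner
    return max_loc[0] - expected_xy[0], max_loc[1] - expected_xy[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic frame: noisy background with one bright 20x20 alignment mark.
    frame = rng.integers(0, 30, size=(480, 640), dtype=np.uint8)
    frame[200:220, 300:320] = 255                   # mark drawn at x=300, y=200
    # Template: the mark inside a 5-pixel dark border, so its best match sits at (295, 195).
    template = np.zeros((30, 30), dtype=np.uint8)
    template[5:25, 5:25] = 255
    dx, dy = find_offset(frame, template, expected_xy=(290, 190))
    print("stage correction needed (pixels):", dx, dy)   # expected roughly (5, 5)
```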

  1. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    Directory of Open Access Journals (Sweden)

    Hon Ming Yip

    2014-01-01

    Full Text Available As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities.

  2. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    Science.gov (United States)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  3. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    Science.gov (United States)

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum-likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge in computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal to eventually make it usable in a clinical setting.
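
    For reference, the MLEM update that the paper distributes on Spark/GraphX can be stated compactly with dense NumPy arrays. This serial sketch only shows which forward- and back-projection products the parallel implementation must compute; the system matrix and measured counts are synthetic.

```python
# Serial, dense MLEM sketch; the paper's contribution is distributing these products.
import numpy as np

def mlem(system_matrix, projections, iterations=50):
    """system_matrix A: (n_detector_bins, n_voxels); projections y: measured counts."""
    a, y = system_matrix, projections
    x = np.ones(a.shape[1])                        # start from a uniform image
    sensitivity = a.sum(axis=0)                    # A^T 1, the normalisation term
    for _ in range(iterations):
        expected = a @ x                           # forward projection  A x
        ratio = y / np.maximum(expected, 1e-12)
        x *= (a.T @ ratio) / np.maximum(sensitivity, 1e-12)   # back projection and update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((200, 50))                      # toy system matrix
    truth = rng.random(50)
    y = rng.poisson(10 * (a @ truth))              # noisy synthetic projections
    estimate = mlem(a, y)
    print("reconstructed image with", estimate.size, "voxels")
```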

  4. "Parallel Leadership in an "Unparallel" World"--Cultural Constraints on the Transferability of Western Educational Leadership Theories across Cultures

    Science.gov (United States)

    Goh, Jonathan Wee Pin

    2009-01-01

    With the global economy becoming more integrated, the issues of cross-cultural relevance and transferability of leadership theories and practices have become increasingly urgent. Drawing upon the concept of parallel leadership in schools proposed by Crowther, Kaagan, Ferguson, and Hann as an example, the purpose of this paper is to examine the…

  5. Culture medium optimization for osmotolerant yeasts by use of a parallel fermenter system and rapid microbiological testing.

    Science.gov (United States)

    Pfannebecker, Jens; Schiffer-Hetz, Claudia; Fröhlich, Jürgen; Becker, Barbara

    2016-11-01

    In the present study, a culture medium for qualitative detection of osmotolerant yeasts, named OM, was developed. For the development, culture media with different concentrations of glucose, fructose, potassium chloride and glycerin were analyzed in a Biolumix™ test incubator. Selectivity for osmotolerant yeasts was guaranteed by a water activity (aw) value of 0.91. The best results regarding fast growth of Zygosaccharomyces rouxii (WH 1002) were achieved in a culture medium consisting of 45% glucose, 5% fructose and 0.5% yeast extract and in a medium with 30% glucose, 10% glycerin, 5% potassium chloride and 0.5% yeast extract. Substances to stimulate yeast fermentation rates were analyzed in a RAMOS® parallel fermenter system, enabling online measurement of the carbon dioxide transfer rate (CTR) in shaking flasks. Significant increases of the CTR were achieved by adding especially 0.1-0.2% ammonium salts ((NH₄)₂HPO₄, (NH₄)₂SO₄ or NH₄NO₃), 0.5% meat peptone and 1% malt extract. Detection times and the CTR of 23 food-borne yeast strains of the genera Zygosaccharomyces, Torulaspora, Schizosaccharomyces, Candida and Wickerhamomyces were analyzed in OM bouillon in comparison to the selective culture media YEG50, MYG50 and DG18 in the parallel fermenter system. The OM culture medium enabled the detection of 10² CFU/g within a time period of 2-3 days, depending on the analyzed yeast species. Compared with YEG50 and MYG50 the detection times could be reduced. As an example, W. anomalus (WH 1021) was detected after 124 h in YEG50, 95.5 h in MYG50 and 55 h in OM bouillon. Compared to YEG50 the maximum CO₂ transfer rates for Z. rouxii (WH 1001), T. delbrueckii (DSM 70526), S. pombe (DSM 70576) and W. anomalus (WH 1016) increased by a factor ≥2.6. Furthermore, enrichment cultures of inoculated high-sugar products in OM culture medium were analyzed in the Biolumix™ system. The results proved that detection times of 3 days for Z. rouxii and T. delbrueckii

  6. Miniaturized Mass-Spectrometry-Based Analysis System for Fully Automated Examination of Conditioned Cell Culture Media

    NARCIS (Netherlands)

    Weber, E.; Pinkse, M.W.H.; Bener-Aksam, E.; Vellekoop, M.J.; Verhaert, P.D.E.M.

    2012-01-01

    We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with

  7. Pursuing Darwin's curious parallel: Prospects for a science of cultural evolution.

    Science.gov (United States)

    Mesoudi, Alex

    2017-07-24

    In the past few decades, scholars from several disciplines have pursued the curious parallel noted by Darwin between the genetic evolution of species and the cultural evolution of beliefs, skills, knowledge, languages, institutions, and other forms of socially transmitted information. Here, I review current progress in the pursuit of an evolutionary science of culture that is grounded in both biological and evolutionary theory, but also treats culture as more than a proximate mechanism that is directly controlled by genes. Both genetic and cultural evolution can be described as systems of inherited variation that change over time in response to processes such as selection, migration, and drift. Appropriate differences between genetic and cultural change are taken seriously, such as the possibility in the latter of nonrandomly guided variation or transformation, blending inheritance, and one-to-many transmission. The foundation of cultural evolution was laid in the late 20th century with population-genetic style models of cultural microevolution, and the use of phylogenetic methods to reconstruct cultural macroevolution. Since then, there have been major efforts to understand the sociocognitive mechanisms underlying cumulative cultural evolution, the consequences of demography on cultural evolution, the empirical validity of assumed social learning biases, the relative role of transformative and selective processes, and the use of quantitative phylogenetic and multilevel selection models to understand past and present dynamics of society-level change. I conclude by highlighting the interdisciplinary challenges of studying cultural evolution, including its relation to the traditional social sciences and humanities.

  8. Parallel autonomy in automated vehicles : Safe motion generation with minimal intervention

    NARCIS (Netherlands)

    Schwarting, Wilko; Alonso Mora, J.; Pauli, Liam; Karaman, Sertac; Rus, Daniela; Chen, I-Ming; Nakamura, Yoshihiko

    2017-01-01

    Current state-of-the-art vehicle safety systems, such as assistive braking or automatic lane following, are still only able to help in relatively simple driving situations. We introduce a Parallel Autonomy shared-control framework that produces safe trajectories based on human inputs even in much

  9. Parallelized system for biopolymer degradation studies through automated microresonator measurement in liquid flow

    DEFF Research Database (Denmark)

    Casci Ceccacci, Andrea; Morelli, Lidia; Bosco, Filippo

    2015-01-01

    In this work we present a novel automated system which allows the study of enzymatic degradation of biopolymer films coated on micromechanical resonators. The system combines an optical readout based on Blu-Ray technology with a software-controlled scanning mechanism. Integrated with a microfluidic setup unit, the system allows high-throughput measurements of resonance frequency over microresonator arrays under controlled flow conditions. We here demonstrate the acquisition of statistical data on biopolymer film degradation under enzymatic reaction over a large sample of micromechanical resonators.

  10. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    Science.gov (United States)

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the Nvidia CUDA parallel programming and computing platform, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect the anesthetic depth level on a related electroencephalogram (EEG) data set. This dataset is rather complex and large. Moreover, achieving more anesthetic levels with a rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a faster time.
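
    The paper implements its network directly on Nvidia CUDA; purely as an illustration of GPU-offloaded training, the sketch below fits a small classifier with PyTorch, which moves the matrix operations onto the GPU when one is available. The EEG "features" and depth-level labels here are random stand-in data, and the network size is arbitrary.

```python
# Illustrative GPU-accelerated classifier training with PyTorch (stand-in data).
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for extracted EEG features: 1,000 samples, 64 features, 4 depth levels.
features = torch.randn(1000, 64, device=device)
labels = torch.randint(0, 4, (1000,), device=device)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                    # full-batch training, kept short for the sketch
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = (model(features).argmax(dim=1) == labels).float().mean().item()
print(f"device: {device}, training accuracy on random data: {accuracy:.2f}")
```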

  11. Prioritizing multiple therapeutic targets in parallel using automated DNA-encoded library screening

    Science.gov (United States)

    Machutta, Carl A.; Kollmann, Christopher S.; Lind, Kenneth E.; Bai, Xiaopeng; Chan, Pan F.; Huang, Jianzhong; Ballell, Lluis; Belyanskaya, Svetlana; Besra, Gurdyal S.; Barros-Aguirre, David; Bates, Robert H.; Centrella, Paolo A.; Chang, Sandy S.; Chai, Jing; Choudhry, Anthony E.; Coffin, Aaron; Davie, Christopher P.; Deng, Hongfeng; Deng, Jianghe; Ding, Yun; Dodson, Jason W.; Fosbenner, David T.; Gao, Enoch N.; Graham, Taylor L.; Graybill, Todd L.; Ingraham, Karen; Johnson, Walter P.; King, Bryan W.; Kwiatkowski, Christopher R.; Lelièvre, Joël; Li, Yue; Liu, Xiaorong; Lu, Quinn; Lehr, Ruth; Mendoza-Losana, Alfonso; Martin, John; McCloskey, Lynn; McCormick, Patti; O'Keefe, Heather P.; O'Keeffe, Thomas; Pao, Christina; Phelps, Christopher B.; Qi, Hongwei; Rafferty, Keith; Scavello, Genaro S.; Steiginga, Matt S.; Sundersingh, Flora S.; Sweitzer, Sharon M.; Szewczuk, Lawrence M.; Taylor, Amy; Toh, May Fern; Wang, Juan; Wang, Minghui; Wilkins, Devan J.; Xia, Bing; Yao, Gang; Zhang, Jean; Zhou, Jingye; Donahue, Christine P.; Messer, Jeffrey A.; Holmes, David; Arico-Muendel, Christopher C.; Pope, Andrew J.; Gross, Jeffrey W.; Evindar, Ghotas

    2017-07-01

    The identification and prioritization of chemically tractable therapeutic targets is a significant challenge in the discovery of new medicines. We have developed a novel method that rapidly screens multiple proteins in parallel using DNA-encoded library technology (ELT). Initial efforts were focused on the efficient discovery of antibacterial leads against 119 targets from Acinetobacter baumannii and Staphylococcus aureus. The success of this effort led to the hypothesis that the relative number of ELT binders alone could be used to assess the ligandability of large sets of proteins. This concept was further explored by screening 42 targets from Mycobacterium tuberculosis. Active chemical series for six targets from our initial effort as well as three chemotypes for DHFR from M. tuberculosis are reported. The findings demonstrate that parallel ELT selections can be used to assess ligandability and highlight opportunities for successful lead and tool discovery.

  12. effects of corporate culture on the implementation of automation

    African Journals Online (AJOL)

    ... corporate culture develops and is manifested within an organization. It is against this background that organizational members, with their values, attitudes, beliefs, assumptions and norms, play a dominant role. Corporate culture, therefore, has the effect of conveying a sense of identity and unity of purpose to members of ...

  13. A framework for accelerated phototrophic bioprocess development: integration of parallelized microscale cultivation, laboratory automation and Kriging-assisted experimental design.

    Science.gov (United States)

    Morschett, Holger; Freier, Lars; Rohde, Jannis; Wiechert, Wolfgang; von Lieres, Eric; Oldiges, Marco

    2017-01-01

    Even though microalgae-derived biodiesel has regained interest within the last decade, industrial production is still challenging for economic reasons. Besides reactor design, as well as value chain and strain engineering, laborious and slow early-stage parameter optimization represents a major drawback. The present study introduces a framework for the accelerated development of phototrophic bioprocesses. A state-of-the-art micro-photobioreactor supported by a liquid-handling robot for automated medium preparation and product quantification was used. To take full advantage of the technology's experimental capacity, Kriging-assisted experimental design was integrated to enable highly efficient execution of screening applications. The resulting platform was used for medium optimization of a lipid production process using Chlorella vulgaris toward maximum volumetric productivity. Within only four experimental rounds, lipid production was increased approximately threefold to 212 ± 11 mg L⁻¹ d⁻¹. Besides nitrogen availability as a key parameter, magnesium, calcium and various trace elements were shown to be of crucial importance. Here, synergistic multi-parameter interactions as revealed by the experimental design introduced significant further optimization potential. The integration of parallelized microscale cultivation, laboratory automation and Kriging-assisted experimental design proved to be a fruitful tool for the accelerated development of phototrophic bioprocesses. By means of the proposed technology, the targeted optimization task was conducted in a very timely and material-efficient manner.
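
    One round of the Kriging-assisted design loop described above can be sketched as: fit a Gaussian-process (Kriging) model to the productivities measured so far, then rank untested medium compositions by expected improvement and pass the best candidates to the next parallel microscale cultivation round. The sketch uses scikit-learn and SciPy; the medium components, toy response surface and batch size are illustrative, not the study's actual design space.

```python
# One illustrative Kriging/expected-improvement design round (not the study's software).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(candidates, model, best_so_far):
    mean, std = model.predict(candidates, return_std=True)
    std = np.maximum(std, 1e-9)
    z = (mean - best_so_far) / std
    return (mean - best_so_far) * norm.cdf(z) + std * norm.pdf(z)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Columns: illustrative nitrogen, magnesium and calcium levels, normalised to 0-1.
    tested = rng.random((12, 3))                        # compositions already cultivated
    productivity = (150 + 60 * tested[:, 0]
                    - 40 * (tested[:, 1] - 0.5) ** 2
                    + rng.normal(0, 5, 12))             # toy lipid productivity, mg/L/d
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(tested, productivity)
    candidates = rng.random((500, 3))                   # pool of untested media
    ei = expected_improvement(candidates, gp, productivity.max())
    next_batch = candidates[np.argsort(ei)[::-1][:8]]   # media for the next cultivation round
    print(next_batch.round(2))
```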

  14. [Evaluation of an automated streaking system of urine samples for urine cultures].

    Science.gov (United States)

    Bustamante, Verónica; Meza, Paulina; Román, Juan C; García, Patricia

    2014-12-01

    Automated systems have simplified laboratory workflow, improved standardization and traceability, and diminished human errors and workload. Although microbiology laboratories have little automation, in recent years new tools for automating pre-analytical steps have appeared. The aim was to assess the performance of an automated streaking machine for urine cultures and its agreement with the conventional manual plating method for semiquantitative colony counts. 495 urine samples for urinary culture were inoculated on CPS® agar using our standard protocol and the PREVI™ Isola. Rates of positivity, negativity, polymicrobial growth, bacterial species, colony counts and re-isolation requirements were compared. Agreement was achieved in 98.97% of the positive/negative results, in 99.39% of the polymicrobial growth, 99.76% of bacterial species isolated and in 98.56% of colony counts. The need for re-isolation of colonies decreased from 12.1% to 1.1% using the automated system. PREVI™ Isola's performance was as expected, saving time and improving bacterial isolation. It represents a helpful tool for laboratory automation.

  15. A Parallel Multiblock Structured Grid Method with Automated Interblocked Unstructured Grids for Chemically Reacting Flows

    Science.gov (United States)

    Spiegel, Seth Christian

    An automated method for using unstructured grids to patch non-C0 interfaces between structured blocks has been developed in conjunction with a finite-volume method for solving chemically reacting flows on unstructured grids. Although the standalone unstructured solver, FVFLO-NCSU, is capable of resolving flows for high-speed aeropropulsion devices with complex geometries, unstructured-mesh algorithms are inherently inefficient when compared to their structured counterparts. However, the advantages of structured algorithms in developing a flow solution in a timely manner can be negated by the amount of time required to develop a mesh for complex geometries. The global domain can be split up into numerous smaller blocks during the grid-generation process to alleviate some of the difficulties in creating these complex meshes. An even greater abatement can be found by allowing the nodes on abutting block interfaces to be nonmatching or non-C0 continuous. One code capable of solving chemically reacting flows on these multiblock grids is VULCAN, which uses a nonconservative approach for patching non-C0 block interfaces. The developed automated unstructured-grid patching algorithm has been installed within VULCAN to provide it the capability of a fully conservative approach for patching non-C0 block interfaces. Additionally, the FVFLO-NCSU solver algorithms have been deeply intertwined with the VULCAN source code to solve chemically reacting flows on these unstructured patches. Finally, the CGNS software library was added to the VULCAN postprocessor so structured and unstructured data can be stored in a single compact file. This final upgrade to VULCAN has been successfully installed and verified using test cases with particular interest towards those involving grids with non-C0 block interfaces.

  16. Human mixed lymphocyte cultures. Evaluation of microculture technique utilizing the multiple automated sample harvester (MASH)

    Science.gov (United States)

    Thurman, G. B.; Strong, D. M.; Ahmed, A.; Green, S. S.; Sell, K. W.; Hartzman, R. J.; Bach, F. H.

    1973-01-01

    Use of lymphocyte cultures for in vitro studies such as pretransplant histocompatibility testing has established the need for standardization of this technique. A microculture technique has been developed that has facilitated the culturing of lymphocytes and increased the quantity of cultures feasible, while lowering the variation between replicate samples. Cultures were prepared for determination of tritiated thymidine incorporation using a Multiple Automated Sample Harvester (MASH). Using this system, the parameters that influence the in vitro responsiveness of human lymphocytes to allogeneic lymphocytes have been investigated. PMID:4271568

  17. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    Science.gov (United States)

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze

    1998-01-01

    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a larger number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent is stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure, and were utilized directly for high throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. Copyright 1998 John Wiley & Sons, Inc.

  18. Development and automation of photobioreactors for microalgae intensive cultures for the use in industrial gas studies

    OpenAIRE

    Debelius, B.; Hernández, C.; Gurgel, H.; Bueno, A.; Ponce, R.; Ortega, T.; Gómez Parra, A.; Lubián, L.M.; Forja, J.M.

    2011-01-01

    Although photobioreactors provide many more advantages than open cultivation systems, more work still has to be done to make them as cost-effective to set up and operate as conventional pipe reactors while giving high algae yields. This study presents the design of two automated tubular photobioreactors of 550 L for intensive microalgae cultures.

  19. Effects of Corporate Culture on the Implementation of Automation in ...

    African Journals Online (AJOL)

    Effects of corporate culture toward achieving a change in any organization cannot be over ... class activities. Knowing the resources that are available in the library helps faculty members provide meaningful topics for research and evaluate topics of interest to students. ..... Psychology of interpersonal relationship. Oxford: ...

  20. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    Science.gov (United States)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
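
    A translation-only toy version of the wavelet-feature correlation idea might look like the following: take level-1 detail coefficients of both images, keep only their strongest maxima, and find the shift that maximizes the cross-correlation of those feature maps. It uses PyWavelets and NumPy on synthetic data; the published algorithm works across multiple resolutions and also recovers rotation.

```python
# Toy wavelet-feature correlation for translation estimation (illustrative only).
import numpy as np
import pywt

def wavelet_features(image, keep=0.05):
    _, (lh, hl, hh) = pywt.dwt2(image, "haar")          # level-1 detail coefficients
    magnitude = np.abs(lh) + np.abs(hl) + np.abs(hh)
    threshold = np.quantile(magnitude, 1 - keep)        # keep only the strongest maxima
    return (magnitude >= threshold).astype(float)

def estimate_shift(reference, target):
    """Estimate the (row, col) translation taking the target onto the reference."""
    f_ref, f_tgt = wavelet_features(reference), wavelet_features(target)
    corr = np.fft.ifft2(np.fft.fft2(f_ref) * np.conj(np.fft.fft2(f_tgt))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy if dy <= corr.shape[0] // 2 else dy - corr.shape[0]
    dx = dx if dx <= corr.shape[1] // 2 else dx - corr.shape[1]
    return 2 * dy, 2 * dx                               # coefficient maps are subsampled by 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    shifted = np.roll(ref, shift=(8, -6), axis=(0, 1))  # known translation
    print(estimate_shift(shifted, ref))                 # expect approximately (8, -6)
```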

  1. Explorations of Space-Charge Limits in Parallel-Plate Diodes and Associated Techniques for Automation

    Science.gov (United States)

    Ragan-Kelley, Benjamin

    Space-charge limited flow is a topic of much interest and varied application. We extend existing understanding of space-charge limits by simulations, and develop new tools and techniques for doing these simulations along the way. The Child-Langmuir limit is a simple analytic solution for space-charge limited current density in a one-dimensional diode. It has been previously extended to two dimensions by numerical calculation in planar geometries. By considering an axisymmetric cylindrical system with axial emission from a circular cathode of finite radius r, an outer drift tube of radius R > r, and gap length L, we further examine the space-charge limit in two dimensions. We simulate a two-dimensional axisymmetric parallel-plate diode of various aspect ratios (r/L), and develop a scaling law for the measured two-dimensional space-charge limit (2DSCL) relative to the Child-Langmuir limit as a function of the aspect ratio of the diode. These simulations are done with a large (100 T) longitudinal magnetic field to restrict electron motion to 1D, with the two-dimensional particle-in-cell simulation code OOPIC. We find a scaling law that is a monotonically decreasing function of this aspect ratio, and the one-dimensional result is recovered in the limit r >> L. The result is in good agreement with prior results in planar geometry, where the emission area is proportional to the cathode width. We find a weak contribution from the effects of the drift tube for current at the beam edge, and a strong contribution of high current-density "wings" at the outer edge of the beam, with a very large relative contribution when the beam is narrow. Mechanisms for enhancing current beyond the Child-Langmuir limit remain a matter of great importance. We analyze the enhancement effects of upstream ion injection on the transmitted current in a one-dimensional parallel-plate diode. Electrons are field-emitted at the cathode, and ions are injected at a controlled current from the anode. An analytic
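
    For orientation, the one-dimensional Child-Langmuir current density that the dissertation's two-dimensional scaling law is normalized against can be evaluated directly. The sketch below assumes the standard planar-diode formula J = (4*eps0/9) * sqrt(2*e/m_e) * V^(3/2) / d^2; the voltage and gap values are illustrative only.

```python
# Reference evaluation of the 1-D Child-Langmuir space-charge-limited current density.
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir_j(voltage_v, gap_m):
    """Space-charge-limited current density (A/m^2) for a planar one-dimensional diode."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) * voltage_v ** 1.5 / gap_m ** 2

if __name__ == "__main__":
    j = child_langmuir_j(100e3, 0.01)        # example: 100 kV across a 1 cm gap
    print(f"J_CL = {j:.3e} A/m^2 ({j / 1e4:.1f} A/cm^2)")
    # A finite-radius cathode (aspect ratio r/L) carries more than this 1-D value;
    # the dissertation fits that enhancement as a function of r/L.
```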

  2. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge

    1991-01-01

    Primary cultures of GABAergic cerebral cortex neurons and glutamatergic cerebellar granule cells were used to study the expression of synaptophysin, a synaptic vesicle marker protein, along with the ability of each cell type to release neurotransmitter upon stimulation. The synaptophysin expression and neurotransmitter release were measured in each of the culture types as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and the distribution of synaptophysin in the developing cells were assessed by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of neurons.

  3. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge

    1991-01-01

    Primary cultures of GABAergic cerebral cortex neurons and glutamatergic cerebellar granule cells were used to study the expression of synaptophysin, a synaptic vesicle marker protein, along with the ability of each cell type to release neurotransmitter upon stimulation. The synaptophysin expression and neurotransmitter release were measured in each of the culture types as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and the distribution of synaptophysin in the developing cells were assessed by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of neurons.

  4. Media and Cultural Industries Internships: A Thematic Review and Digital Labor Parallels

    Directory of Open Access Journals (Sweden)

    Thomas Corrigan

    2015-09-01

    Full Text Available This article reviews existing research on the motivations and experiences of interns in media and cultural industries. Digital labour theories are used to organize and make sense of the existing internship literature. Throughout the article, parallels are also drawn between the experiences of interns and those of digital creative labourers, both professionals and peer producers. Three key themes are identified within the internship literature: (1) interns derive satisfaction from work they consider meaningful, particularly hands-on work executed under the training and trust of effective supervisors; (2) interns see their work as future-oriented investments in their skills, professional networks, and personal brands; and (3) the ambiguity and professional necessity of media and cultural industries internships make them fertile ground for exploitation and self-exploitation. In conclusion, I argue that attentiveness to meaning, temporality, and ambiguity will be essential to future critical investigations of internships.

  5. Performance of Gram staining on blood cultures flagged negative by an automated blood culture system.

    Science.gov (United States)

    Peretz, A; Isakovich, N; Pastukh, N; Koifman, A; Glyatman, T; Brodsky, D

    2015-08-01

    Blood is one of the most important specimens sent to a microbiology laboratory for culture. Most blood cultures are incubated for 5-7 days, longer only where infection by slowly proliferating microorganisms, or infection expressed by a small number of bacteria in the bloodstream, is suspected. At the end of incubation, therefore, misidentification of positive cultures, and hence false-negative results, remains a real possibility. The aim of this work was to confirm, by Gram staining, the absence of microorganisms in blood cultures identified as negative by the BACTEC™ FX system at the end of incubation. All bottles defined as negative by the BACTEC FX system were Gram-stained using an automatic device and inoculated on solid growth media. In our work, 15 cultures that were defined as negative by the BACTEC FX system at the end of incubation were found to contain microorganisms when Gram-stained. The main characteristic of most bacteria and fungi growing in the culture bottles that were defined as negative was slow growth. This finding raises a problematic issue concerning the need to perform Gram staining of all blood cultures, which could overload routine laboratory work, especially in laboratories serving large medical centers and receiving a large number of blood cultures.

  6. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    Science.gov (United States)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational, and automation-capability factors upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and the organizational mission, indicating the potential for appropriate trust development in operational pilots. Factors identified as shaping that trust include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of, and communications among, the organizations involved in the system development.

  7. A comparison of long-term parallel measurements of sunshine duration obtained with a Campbell-Stokes sunshine recorder and two automated sunshine sensors

    Science.gov (United States)

    Baumgartner, D. J.; Pötzi, W.; Freislich, H.; Strutzmann, H.; Veronig, A. M.; Foelsche, U.; Rieder, H. E.

    2017-06-01

    In recent decades, automated sensors for sunshine duration (SD) measurements have been introduced in meteorological networks, thereby replacing traditional instruments, most prominently the Campbell-Stokes (CS) sunshine recorder. Parallel records of automated and traditional SD recording systems are rare. Nevertheless, such records are important to understand the differences/similarities in SD totals obtained with different instruments and how changes in monitoring device type affect the homogeneity of SD records. This study investigates the differences/similarities in parallel SD records obtained with a CS and two automated SD sensors between 2007 and 2016 at the Kanzelhöhe Observatory, Austria. Comparing individual records of daily SD totals, we find differences of both positive and negative sign, with smallest differences between the automated sensors. The larger differences between CS-derived SD totals and those from automated sensors can be attributed (largely) to the higher sensitivity threshold of the CS instrument. Correspondingly, the closest agreement among all sensors is found during summer, the time of year when sensitivity thresholds are least critical. Furthermore, we investigate the performance of various models to create the so-called sensor-type-equivalent (STE) SD records. Our analysis shows that regression models including all available data on daily (or monthly) time scale perform better than simple three- (or four-) point regression models. Despite general good performance, none of the considered regression models (of linear or quadratic form) emerges as the "optimal" model. Although STEs prove useful for relating SD records of individual sensors on daily/monthly time scales, this does not ensure that STE (or joint) records can be used for trend analysis.
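
    As a hedged illustration of the simplest sensor-type-equivalent (STE) idea discussed above, the sketch below fits a linear regression that maps daily totals from an automated sensor onto the Campbell-Stokes scale. The daily values are invented for the example; the study's actual data and model selection are not reproduced here.

```python
import numpy as np

# Hypothetical parallel daily sunshine-duration totals in hours (illustrative only)
sd_automated = np.array([0.0, 1.2, 3.5, 6.1, 8.4, 10.2, 11.9, 13.0])
sd_campbell_stokes = np.array([0.0, 0.8, 3.1, 5.8, 8.3, 10.4, 12.2, 13.4])

# Linear STE model: CS-equivalent = a * automated + b
a, b = np.polyfit(sd_automated, sd_campbell_stokes, deg=1)

def ste_from_automated(sd_hours):
    """Map automated-sensor daily totals onto the Campbell-Stokes-equivalent scale."""
    return a * sd_hours + b

print(f"slope = {a:.3f}, intercept = {b:.3f}")
print(ste_from_automated(np.array([5.0, 9.0])))
```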

  8. FY1995 study of low power LSI design automation software with parallel processing; 1995 nendo heiretsu shori wo katsuyoshita shodenryoku LSI muke sekkei jidoka software no kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The need for low-power LSIs has increased rapidly in recent years. For low-power LSI development, not only new circuit technologies but also new design automation tools supporting those technologies are indispensable. The purpose of this project was to develop new design automation software able to design digital LSIs with much lower power consumption than conventional CMOS LSIs. New design automation software for very low power LSIs was developed targeting the pass-transistor logic SPL, a dedicated low-power circuit technology. The software includes a logic synthesis function for pass-transistor-based macrocells and a macrocell placement function. Several new algorithms were developed for the software, e.g. for BDD construction. Some of them were designed and implemented for parallel processing in order to reduce the processing time. The logic synthesis function was tested on a set of benchmarks and finally applied to a low-power CPU design. The designed 8-bit CPU was fully compatible with the Zilog Z-80. The power dissipation of the CPU was compared with that of a commercial CMOS Z-80: the new CPU reduced power by up to 82%. Parallel processing speed-up was measured on the macrocell placement function, where a 34-fold speed-up was achieved. (NEDO)

  9. Process development of human multipotent stromal cell microcarrier culture using an automated high‐throughput microbioreactor

    Science.gov (United States)

    Hanga, Mariana P.; Heathman, Thomas R. J.; Coopman, Karen; Nienow, Alvin W.; Williams, David J.; Hewitt, Christopher J.

    2017-01-01

    Microbioreactors play a critical role in process development as they reduce reagent requirements and can facilitate high-throughput screening of process parameters and culture conditions. Here, we have demonstrated and explained in detail, for the first time, the amenability of the automated ambr15 cell culture microbioreactor system for the development of scalable adherent human mesenchymal multipotent stromal/stem cell (hMSC) microcarrier culture processes. This was achieved by first improving suspension and mixing of the microcarriers and then improving cell attachment, thereby reducing the initial growth lag phase. The latter was achieved by using only 50% of the final working volume of medium for the first 24 h and using an intermittent agitation strategy. These changes resulted in a >150% increase in viable cell density after 24 h compared to the original process (no agitation for 24 h and 100% working volume). Using the same methodology as in the ambr15, similar improvements were obtained with larger-scale spinner flask studies. Finally, this improved bioprocess methodology based on a serum-based medium was applied to a serum-free process in the ambr15, resulting in a >250% increase in yield compared to the serum-based process. At both scales, the agitation used during culture was the minimum required for microcarrier suspension (NJS). The use of the ambr15, with its improved control compared to the spinner flask, reduced the coefficient of variation of viable cell density in the serum-containing medium from 7.65% to 4.08%, and the switch to serum-free medium further reduced these values to 1.06% and 0.54%, respectively. The combination of both serum-free and automated processing improved the reproducibility more than 10-fold compared to the serum-based, manual spinner flask process. The findings of this study demonstrate that the ambr15 microbioreactor is an effective tool for bioprocess development of hMSC microcarrier cultures and that a combination

  10. In vitro multiplication of Colocasia esculenta clone ‘INIVIT MC-2001’ in semi-automated culture systems

    Directory of Open Access Journals (Sweden)

    Diosdada Galvez Guerra

    2014-04-01

    Full Text Available One alternative for increasing yields in the taro (Colocasia esculenta) culture could be the application of biotechnological methods, especially semi-automated systems. The new clone 'INIVIT MC-2001', despite its high productive potential, has low multiplication coefficients in conventional seed production systems. Therefore, the aim of this paper was to determine the effect of semi-automated systems on the in vitro multiplication of taro 'INIVIT MC-2001'. The effect of three semi-automated culture systems was studied: a temporary immersion system, an immersion system with constant aeration of the culture medium, and a constant immersion system with aeration of the inner atmosphere of the culture vessel. The type of semi-automated culture system influenced the multiplication coefficient; the best results were obtained with the temporary immersion system (8.46) and the immersion system with constant aeration of the culture medium (8.42), which are alternatives for increasing the multiplication coefficient in clone 'INIVIT MC-2001'. Key words: in vitro culture, taro, temporary immersion

  11. Development of an automated chip culture system with integrated on-line monitoring for maturation culture of retinal pigment epithelial cells

    Directory of Open Access Journals (Sweden)

    Mee-Hae Kim

    2017-10-01

    Full Text Available In cell manufacturing, the establishment of a fully automated, microfluidic cell culture system that can be used for long-term cell cultures, as well as for process optimization, is highly desirable. This study reports the development of a novel chip bioreactor system that can be used for automated long-term maturation cultures of retinal pigment epithelial (RPE) cells. The system consists of an incubation unit, a medium supply unit, a culture observation unit, and a control unit. In the incubation unit, the chip contains a closed culture vessel (2.5 mm diameter, working volume 9.1 μL), which can be set to 37 °C and 5% CO2, and uses a gas-permeable resin (polydimethylsiloxane) as the vessel wall. RPE cells were seeded at 5.0 × 10⁴ cells/cm², and the medium was changed every day by introducing fresh medium using the medium supply unit. Culture solutions were stored either in the refrigerator or the freezer, and fresh medium was prepared before each medium change by warming to 37 °C and mixing. Automated culture was allowed to continue for 30 days to allow maturation of the RPE cells. This chip culture system allows the long-term, bubble-free culture of RPE cells, while also making it possible to observe the cells in order to elucidate their morphology or show the presence of tight junctions. This culture system, along with its integrated on-line monitoring system, can therefore be applied to long-term cultures of RPE cells, and should contribute to process control in RPE cell manufacturing.

  12. Identifying and Quantifying Cultural Factors That Matter to the IT Workforce: An Approach Based on Automated Content Analysis

    DEFF Research Database (Denmark)

    Schmiedel, Theresa; Müller, Oliver; Debortoli, Stefan

    2016-01-01

    builds on 112,610 online reviews of Fortune 500 IT companies collected from Glassdoor, an online platform on which current and former employees can anonymously review companies and their management. We perform an automated content analysis to identify cultural factors that employees emphasize...
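
    The record does not state which text-mining technique was used, so the following is only a generic sketch of automated content analysis on review text, here using topic modelling (LDA) from scikit-learn over a few invented review snippets standing in for the Glassdoor corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-ins for employee reviews (the study analyzed 112,610 Glassdoor reviews)
reviews = [
    "great work life balance and supportive management",
    "long hours but interesting projects and smart colleagues",
    "pay is below market and the promotion process is opaque",
    "flexible schedule, remote work and good benefits",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Top words per topic as candidate "cultural factors"
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```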

  13. Automated slice-specific simultaneous z-shim method for reducing B1 inhomogeneity and susceptibility-induced signal loss with parallel transmission at 3T.

    Science.gov (United States)

    Schneider, Rainer; Boada, Fernando; Haueisen, Jens; Pfeuffer, Josef

    2015-10-01

    Through-plane susceptibility-induced signal loss in gradient recalled echo (GRE)-based sequences can considerably impair both the clinical diagnosis and functional analysis of certain brain areas. In this work, a fully automated simultaneous z-shim approach is proposed on the basis of parallel transmit (pTX) to reduce those signal dropouts at 3T. The approach uses coil-specific time-delayed excitations to impose a z-shim phase. It was extended toward B1 inhomogeneity mitigation and adequate slice-specific signal-dephasing cancellation on the basis of the prevailing B0 and B1 spatial information. Local signal recovery level and image quality preservation were analyzed using multi-slice FLASH experiments in humans and compared to the standard excitation. Spatial blood-oxygen-level-dependent (BOLD) activation coverage was further compared in breath-hold functional MRI. The pTX z-shim approach recovered approximately 47% of brain areas affected by signal loss in standard excitation images across all subjects. At the same time, B1 shading effects could be substantially reduced. In these areas, BOLD activation coverage could be also increased by approximately 57%. The proposed fully automated pTX z-shim method enables time-efficient and robust signal recovery in GRE-based sequences on a clinical scanner using two standard whole-body transmit coils. © 2014 Wiley Periodicals, Inc.

  14. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    Science.gov (United States)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image Based Modeling) and a classical survey with a total station, a Nikon Nivo C. The data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out in Agisoft PhotoScan, and the final result was a scaled 3D model of the monument, imported into MeshLab for viewing. Three orthophotos in JPG format were extracted from the model and then imported into AutoCAD to obtain façade surveys.

  15. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    Science.gov (United States)

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  16. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    Directory of Open Access Journals (Sweden)

    Dai Fei Elmer Ker

    Full Text Available Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and
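
    As a rough sketch of the kind of decision rule such a system applies (not the authors' actual computer-vision pipeline), the snippet below estimates confluency as the covered fraction of a segmented foreground mask and flags when a pre-defined threshold is crossed. The segmentation step is assumed to exist already, and the 0.85 threshold is an arbitrary placeholder.

```python
import numpy as np

def confluency_from_mask(cell_mask: np.ndarray) -> float:
    """Fraction of the imaged dish area covered by cells, given a boolean mask."""
    return float(cell_mask.mean())

def should_subculture(cell_mask: np.ndarray, threshold: float = 0.85) -> bool:
    """Trigger a passaging notification once confluency reaches the threshold."""
    return confluency_from_mask(cell_mask) >= threshold

# Toy mask standing in for a segmented phase-contrast image
rng = np.random.default_rng(0)
mask = rng.random((480, 640)) < 0.9
print(confluency_from_mask(mask), should_subculture(mask))
```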

  17. Use of the extended parallel processing model to evaluate culturally relevant kernicterus messages.

    Science.gov (United States)

    Russell, Jessica C; Smith, Sandi; Novales, Wilma; Massi Lindsey, Lisa L; Hanson, Joseph

    2013-01-01

    Kernicterus is a serious but easily preventable disease in newborns that is not well-known even by some health care professionals. This study evaluated a parent guide and poster on kernicterus awareness and prevention generated by the Centers for Disease Control and Prevention. The extended parallel processing model was used as a framework for creating the interview protocol and analyzing the results. In-depth interviews were conducted with four parents and six health care personnel of different ethnicities to evaluate the materials. Content for the parent guide and poster was held constant, but photos were varied according to the ethnicity of the baby (white, African American, or Hispanic) and the language in which the interviews were conducted (English and Spanish). The parent guide was evaluated positively, but reactions to the poster were varied. The consensus was that the poster drew more attention than the pocket guide but lacked sufficient information about what jaundice is or how to treat it, while the pocket guide provided information, especially with regard to efficacy. The extended parallel processing model claims that when efficacy is equal to or higher than perceived threat, respondents should engage in recommended responses, which was the general finding from these interviews. Recommendations for improvements of the materials are presented. The focus on different ethnicities in the materials was perceived as unnecessary and potentially counter-productive. Both parents and health care professionals mentioned the lack of information regarding treatment. Providing information on the length and effectiveness of treatment for jaundice and kernicterus might increase efficacy in averting the threat in both conditions. Copyright © 2013 National Association of Pediatric Nurse Practitioners. Published by Mosby, Inc. All rights reserved.

  18. Semi-automated relative quantification of cell culture contamination with mycoplasma by Photoshop-based image analysis on immunofluorescence preparations.

    Science.gov (United States)

    Kumar, Ashok; Yerneni, Lakshmana K

    2009-01-01

    Mycoplasma contamination in cell culture is a serious setback for the cell-culturist. The experiments undertaken using contaminated cell cultures are known to yield unreliable or false results due to various morphological, biochemical and genetic effects. Earlier surveys revealed incidences of mycoplasma contamination in cell cultures to range from 15 to 80%. Out of a vast array of methods for detecting mycoplasma in cell culture, the cytological methods directly demonstrate the contaminating organism present in association with the cultured cells. In this investigation, we report the adoption of a cytological immunofluorescence assay (IFA), in an attempt to obtain a semi-automated relative quantification of contamination by employing the user-friendly Photoshop-based image analysis. The study performed on 77 cell cultures randomly collected from various laboratories revealed mycoplasma contamination in 18 cell cultures simultaneously by IFA and Hoechst DNA fluorochrome staining methods. It was observed that the Photoshop-based image analysis on IFA stained slides was very valuable as a sensitive tool in providing quantitative assessment on the extent of contamination both per se and in comparison to cellularity of cell cultures. The technique could be useful in estimating the efficacy of anti-mycoplasma agents during decontaminating measures.

  19. SOFTWARE FOR DESIGNING PARALLEL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    M. K. Bouza

    2017-01-01

    Full Text Available The object of this research is tooling to support the development of parallel programs in C/C++. Methods and software that automate the process of designing parallel applications are proposed.

  20. Validation of the BacT/ALERT®3D automated culture system for the detection of microbial contamination of epithelial cell culture medium.

    Science.gov (United States)

    Plantamura, E; Huyghe, G; Panterne, B; Delesalle, N; Thépot, A; Reverdy, M E; Damour, O; Auxenfans, Céline

    2012-08-01

    Living tissue-engineered products for regenerative therapy cannot withstand the usual pharmacopoeial methods of purification and terminal sterilization. Consequently, these products must be manufactured under aseptic conditions in microbiologically controlled facilities. This study was designed to validate the BacT/ALERT®3D automated culture system for microbiological control of epithelial cell culture medium (ECCM). Suspensions of the nine microorganisms recommended by the European Pharmacopoeia (Chap. 2.6.27: "Microbiological control of cellular products"), plus one species from oral mucosa and two negative controls with no microorganisms, were prepared in ECCM. They were inoculated into FA (anaerobic) and SN (aerobic) culture bottles (Biomérieux, Lyon, France) and incubated in a BacT/ALERT®3D automated culture system. For each species, five sets of bottles were inoculated for reproducibility testing: one sample was incubated at the French Health Products Agency laboratory (reference) and the other four at the Cell and Tissue Bank of Lyon, France. The specificity of the positive culture bottles was verified by Gram staining and then subcultured to identify the microorganism grown. The BacT/ALERT®3D system detected all the inoculated microorganisms in less than 2 days, except Propionibacterium acnes, which was detected in 3 days. In conclusion, this study demonstrates that the BacT/ALERT®3D system can detect both aerobic and anaerobic bacterial and fungal contamination of an epithelial cell culture medium, consistent with the European Pharmacopoeia chapter 2.6.27 recommendations. It showed the specificity, sensitivity, and precision of the BacT/ALERT®3D method, since all the microorganisms seeded were detected at both sites and the uncontaminated ECCM remained negative at 7 days.

  1. Parallel experimental design and multivariate analysis provides efficient screening of cell culture media supplements to improve biosimilar product quality.

    Science.gov (United States)

    Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin

    2017-07-01

    Rational and high-throughput optimization of mammalian cell culture media has great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, 17 compounds were separated into five different groups, considering their mode of biological action. The concentration ranges of the medium supplements were defined according to information found in the literature and in-house experience. The screening experiments produced wide glycosylation pattern ranges. Multivariate analysis including principal component analysis and decision trees was used to select the best-performing glycosylation modulators. Subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at a larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in shake tube experiments: 75% of the conditions were equally close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc.
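
    A minimal sketch of the multivariate step described above: principal component analysis over per-condition glycoform distributions using scikit-learn. The glycan labels and numbers are placeholders invented for the example, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: screening conditions; columns: relative abundance of glycoforms (toy values)
#                G0F   G1F   G2F   Man5
profiles = np.array([
    [0.55, 0.30, 0.10, 0.05],
    [0.48, 0.35, 0.12, 0.05],
    [0.62, 0.25, 0.08, 0.05],
    [0.40, 0.38, 0.17, 0.05],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(profiles)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("condition scores on PC1/PC2:")
print(scores)
```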

  2. Diagnostic accuracy of uriSed automated urine microscopic sediment analyzer and dipstick parameters in predicting urine culture test results.

    Science.gov (United States)

    Huysal, Kağan; Budak, Yasemin U; Karaca, Ayse Ulusoy; Aydos, Murat; Kahvecioğlu, Serdar; Bulut, Mehtap; Polat, Murat

    2013-01-01

    Urinary tract infection (UTI) is one of the most common types of infection. Currently, diagnosis is primarily based on microbiologic culture, which is time-consuming and labor-intensive. The aim of this study was to assess the diagnostic accuracy of urinalysis results from UriSed (77 Electronica, Budapest, Hungary), an automated microscopic image-based sediment analyzer, in predicting positive urine cultures. We examined a total of 384 urine specimens from hospitalized patients and outpatients attending our hospital on the same day for urinalysis, dipstick tests and semi-quantitative urine culture. The urinalysis results were compared with those of conventional semi-quantitative urine culture. Of 384 urinary specimens, 68 were positive for bacteriuria by culture, and were thus considered true positives. Comparison of these results with those obtained from the UriSed analyzer indicated that the analyzer had a specificity of 91.1%, a sensitivity of 47.0%, a positive predictive value (PPV) of 53.3% (95% confidence interval (CI) = 40.8-65.3), and a negative predictive value (NPV) of 88.8% (95% CI = 85.0-91.8%). The accuracy was 83.3% when the urine leukocyte parameter was used, 76.8% when bacteriuria analysis of urinary sediment was used, and 85.1% when the bacteriuria and leukocyturia parameters were combined. The presence of nitrite was the best indicator of culture positivity (99.3% specificity) but had a negative likelihood ratio of 0.7, indicating that it was not a reliable clinical test. Although the specificity of the UriSed analyzer was within acceptable limits, the sensitivity value was low. Thus, UriSed urinalysis results do not accurately predict the outcome of culture.
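
    The reported performance figures follow from a standard 2x2 confusion table. The counts below are back-calculated from the abstract (68 culture-positive of 384 specimens, 47.0% sensitivity, 91.1% specificity) and rounded, so they are an approximate illustration rather than the study's raw data.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Approximate counts consistent with the abstract: 68 culture-positive, 316 negative
metrics = diagnostic_metrics(tp=32, fp=28, fn=36, tn=288)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```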

  3. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    Directory of Open Access Journals (Sweden)

    Youn-Kyu Kim

    2015-03-01

    Full Text Available In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on the muscles to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors, and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. This reactor design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, in order to reuse the bioreactor after each experiment. The bioreactor control system is designed to circulate culture media to 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36±1°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design, to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of this device.

  4. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    Science.gov (United States)

    Kim, Youn-Kyu; Park, Seul-Hyun; Lee, Joo-Hee; Choi, Gi-Hyuk

    2015-03-01

    In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on the muscles to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors, and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. This reactor design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, in order to reuse the bioreactor after each experiment. The bioreactor control system is designed to circulate culture media to 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design, to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of this device.
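
    A deliberately simplified sketch of the setpoint logic described in both records above (a 36 °C temperature band and a 70% relative-humidity floor); the actual flight hardware and its control software are not described in enough detail to reproduce, so all names and limits here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ReactorState:
    temperature_c: float
    relative_humidity: float

def control_step(state: ReactorState) -> dict:
    """On/off actuator decisions for one control cycle (illustrative only)."""
    return {
        "heater_on": state.temperature_c < 35.0,          # hold roughly 36 +/- 1 deg C
        "heater_off": state.temperature_c > 37.0,
        "humidifier_on": state.relative_humidity < 70.0,  # keep RH above 70%
    }

print(control_step(ReactorState(temperature_c=34.6, relative_humidity=65.0)))
```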

  5. Surveillance cultures of samples obtained from biopsy channels and automated endoscope reprocessors after high-level disinfection of gastrointestinal endoscopes

    Directory of Open Access Journals (Sweden)

    Chiu King-Wah

    2012-09-01

    Full Text Available Abstract Background: The instrument channels of gastrointestinal (GI) endoscopes may be heavily contaminated with bacteria even after high-level disinfection (HLD). The British Society of Gastroenterology guidelines emphasize the benefits of manually brushing endoscope channels and using automated endoscope reprocessors (AERs) for disinfecting endoscopes. In this study, we aimed to assess the effectiveness of decontamination using reprocessors after HLD by comparing cultured samples obtained from the biopsy channels (BCs) of GI endoscopes and the internal surfaces of AERs. Methods: We conducted a 5-year prospective study. Every month, random consecutive sampling was carried out after a complete reprocessing cycle; 420 rinse samples and 420 swab samples were collected from BCs and the internal surfaces of AERs, respectively. Of the 420 rinse samples collected from the BCs of GI endoscopes, 300 were obtained from gastroscopes and 120 from colonoscopes. Samples were collected by flushing the BCs with sterile distilled water and swabbing the residual water from the AERs after reprocessing. These samples were cultured to detect the presence of aerobic and anaerobic bacteria and mycobacteria. Results: The number of culture-positive samples obtained from BCs (13.6%, 57/420) was significantly higher than that obtained from AERs (1.7%, 7/420). In addition, the numbers of culture-positive samples obtained from the BCs of gastroscopes (10.7%, 32/300) and colonoscopes (20.8%, 25/120) were significantly higher than those obtained from the AERs used to reprocess gastroscopes (2.0%, 6/300) and colonoscopes (0.8%, 1/120). Conclusions: Culturing rinse samples obtained from BCs provides a better indication of the effectiveness of the decontamination of GI endoscopes after HLD than culturing swab samples obtained from the inner surfaces of AERs, as the swab samples only indicate whether the AERs themselves are free from microbial contamination.

  6. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration.

    Science.gov (United States)

    Forment, Javier; Gilabert, Francisco; Robles, Antonio; Conejero, Vicente; Nuez, Fernando; Blanca, Jose M

    2008-01-07

    Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http://bioinf.comav.upv.es/est2uni. This site also provides detailed instructions

  7. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    Directory of Open Access Journals (Sweden)

    Nuez Fernando

    2008-01-01

    Full Text Available Abstract Background: Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results: We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion: The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http://bioinf.comav.upv.es/est2uni.

  8. The performance of fully automated urine analysis results for predicting the need of urine culture test

    Directory of Open Access Journals (Sweden)

    Hatice Yüksel

    2014-06-01

    Full Text Available Objectives: Urinalysis and urine culture are the most common tests for the diagnosis of urinary tract infections. The aim of our study was to examine the diagnostic performance of urine analysis and its role in determining whether urine culture is required. Methods: Urine culture and urine analysis results of 362 patients were retrospectively analyzed. Culture results were taken as the reference for the chemical and microscopic examination of urine, and the diagnostic accuracy of the test parameters that may mark urinary tract infection, as well as the performance of urine analysis in predicting the need for urine culture, were calculated. Results: A total of 362 urine culture results were evaluated, and 67% of them were negative. Leukocyte esterase and nitrite on chemical analysis, and leukocytes and bacteria on microscopic analysis, were normal in 50.4% of culture-negative urines. In the diagnostic accuracy calculations, leukocyte esterase (86.1%) and microscopic leukocytes (88.0%) showed high sensitivity, while nitrite (95.4%) and bacteria (86.6%) showed high specificity. The area under the curve in ROC analysis for microscopic examination of leukocytes was calculated as 0.852. Conclusion: Fully automated urine analyzers can provide sufficient diagnostic accuracy for urine analysis. Effective evaluation of urine analysis results can predict the need for urine culture requests and may contribute to a reduction in workload and cost. J Clin Exp Invest 2014; 5(2): 286-289

  9. ATTEMPTS TO AUTOMATE THE PROCESS OF GENERATION OF ORTHOIMAGES OF OBJECTS OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2015-02-01

    Full Text Available At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. The orthoimage is a cartometric form of photographic presentation of information in a two-dimensional reference system. The paper discusses the automation of orthoimage generation based on TLS data and digital images. Attempts are currently being made to apply modern technologies not only for surveying, but also during data processing. This paper presents the use of appropriate algorithms and the author's application for automatic generation of the projection plane needed to acquire intensity orthoimages from TLS data. Such planes are defined manually in the majority of popular TLS data processing applications. A separate issue related to RGB image generation is the orientation of the digital images with respect to the scans. This is important in particular when scans and photographs are not taken simultaneously. The paper presents experiments on the use of the SIFT algorithm for automatic matching of intensity orthoimages and digital (RGB) photographs. Satisfactory results have been obtained both for the automation of the process and for the quality of the resulting orthoimages.
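
    The SIFT matching step mentioned above can be sketched with OpenCV (version 4.4 or later, where SIFT is included in the main package). File names are placeholders and this is a generic illustration, not the author's own application.

```python
import cv2

# Placeholder inputs: a TLS intensity orthoimage and a digital photograph, as grayscale
intensity = cv2.imread("intensity_orthoimage.png", cv2.IMREAD_GRAYSCALE)
photo = cv2.imread("digital_photo.png", cv2.IMREAD_GRAYSCALE)
assert intensity is not None and photo is not None, "replace placeholders with real files"

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(intensity, None)
kp2, des2 = sift.detectAndCompute(photo, None)

# Lowe ratio-test matching between the two descriptor sets
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences")
```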

  10. Rapid Prototyping of a Cyclic Olefin Copolymer Microfluidic Device for Automated Oocyte Culturing.

    Science.gov (United States)

    Berenguel-Alonso, Miguel; Sabés-Alsina, Maria; Morató, Roser; Ymbern, Oriol; Rodríguez-Vázquez, Laura; Talló-Parra, Oriol; Alonso-Chamarro, Julián; Puyol, Mar; López-Béjar, Manel

    2017-10-01

    Assisted reproductive technology (ART) can benefit from the features of microfluidic technologies, such as the automation of time-consuming labor-intensive procedures, the possibility to mimic in vivo environments, and the miniaturization of the required equipment. To date, most of the proposed approaches are based on polydimethylsiloxane (PDMS) as platform substrate material due to its widespread use in academia, despite certain disadvantages, such as the elevated cost of mass production. Herein, we present a rapid fabrication process for a cyclic olefin copolymer (COC) monolithic microfluidic device combining hot embossing-using a low-temperature cofired ceramic (LTCC) master-and micromilling. The microfluidic device was suitable for trapping and maturation of bovine oocytes, which were further studied to determine their ability to be fertilized. Furthermore, another COC microfluidic device was fabricated to store sperm and assess its quality parameters over time. The study herein presented demonstrates a good biocompatibility of the COC when working with gametes, and it exhibits certain advantages, such as the nonabsorption of small molecules, gas impermeability, and low fabrication costs, all at the prototyping and mass production scale, thus taking a step further toward fully automated microfluidic devices in ART.

  11. Detection of bacteria in platelet concentrates: comparison of broad-range real-time 16S rDNA polymerase chain reaction and automated culturing

    NARCIS (Netherlands)

    Mohammadi, Tamimount; Pietersz, Ruby N. I.; Vandenbroucke-Grauls, Christina M. J. E.; Savelkoul, Paul H. M.; Reesink, Henk W.

    2005-01-01

    BACKGROUND: Based on real-time polymerase chain reaction (PCR) technology, a broad-range 16S rDNA assay was validated and its performance was compared to that of an automated culture system to determine its usefulness for rapid routine screening of platelet concentrates (PCs). STUDY DESIGN AND

  12. Automated Flow Cytometry: An Alternative to Urine Culture in a Routine Clinical Microbiology Laboratory?

    Directory of Open Access Journals (Sweden)

    Patricia Mejuto

    2017-01-01

    Full Text Available The urine culture is the "gold standard" for the diagnosis of urinary tract infections (UTI) but constitutes a significant workload in the routine clinical laboratory. Due to the high percentage of negative results, there is a need for an efficient screening method, with a high negative predictive value (NPV), that could reduce the number of unnecessary culture tests. With the purpose of improving the efficiency of laboratory work, several methods for screening out the culture-negative samples have been developed, but none of them has shown adequate sensitivity (SE) and high NPV. Many authors show data about the efficacy of flow cytometry in the routine clinical laboratory. The aim of this article is to review and discuss the current literature on the feasibility of urine flow cytometry (UFC) and its utility as an alternative analytical technique in urinalysis.

  13. A large-scale automated method for hepatocyte isolation: effects on proliferation in culture.

    Science.gov (United States)

    Nieuwoudt, M J; Kreft, E; Olivier, B; Malfeld, S; Vosloo, J; Stegman, F; Kunneke, R; Van Wyk, A J; Van der Merwe, S W

    2005-01-01

    Large-scale sterile methods for isolating hepatocytes are desirable for the development of bioartificial liver support systems. In this study the traditional centrifuge method was compared with the use of a Baylor Rapid Autologous Transfusion (BRAT) machine for isolating large quantities of porcine hepatocytes. After isolating hepatocytes, the methods were evaluated in terms of cell viability and yield per liver, proliferation over 7 days, and the effects on the cell cycle using the trypan blue exclusion test, conventional phase-contrast light microscopy, the lactate to pyruvate ratio, the leakage of lactate dehydrogenase (LD) and aspartate aminotransferase (AST), lidocaine clearance, albumin production, and flow cytometry. With the centrifuge method the mean cell viability was 92.5%, while with the BRAT method the viability was 95.9%. The minimal cell yields with the BRAT procedure were 7.3 x 10(9) for 250-ml centrifuge bowls and 2.8 x 10(9) for 165-ml bowls, which compares well with that found by other authors. Because the same initial procedures were employed in both methods the total hepatocyte yield per liver was comparable. Flow cytometry confirmed that the proliferation of hepatocytes was facilitated by oxygenation during the isolation procedure. The recovery of hepatocytes in culture following isolation was similar after either method. Daily microscopic investigation indicated that cytoplasmic vacuolization and granularities were present after either procedure and these disappeared following 3-4 days of culturing. Flow cytometry indicated that the hepatocyte cell cycle was similar after either method; at 7 days the profile indicated that the cells were still proliferating. Trends in the lactate to pyruvate ratio and the leakage of LD and AST indicated that the functional polarity of hepatocytes was regained after approximately 3 days. Lidocaine clearance at 4 days indicated that the cytochrome P450 system was active, while significant albumin production was

  14. AUTOMATED VOXEL MODEL FROM POINT CLOUDS FOR STRUCTURAL ANALYSIS OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    G. Bitelli

    2016-06-01

    Full Text Available In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is in the form of point clouds. We developed a semi-automatic procedure to convert point clouds, acquired from laser scanning or digital photogrammetry, to a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result, a voxel model with variable resolution is produced. Different parameters are compared and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
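
    A compact illustration of the first step of such a workflow: converting a point cloud into an occupied-voxel grid with NumPy. The resolution and the random cloud are arbitrary, and producing the filled interior and variable-resolution model described in the record would require additional steps.

```python
import numpy as np

def occupied_voxels(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Return the unique integer voxel indices occupied by an N x 3 point cloud."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    return np.unique(idx, axis=0)

# Toy cloud: 1000 random points inside a 2 m x 2 m x 5 m box
rng = np.random.default_rng(42)
cloud = rng.random((1000, 3)) * np.array([2.0, 2.0, 5.0])
voxels = occupied_voxels(cloud, voxel_size=0.25)
print(f"{len(voxels)} occupied voxels at 0.25 m resolution")
```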

  15. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    International Nuclear Information System (INIS)

    Hirata, So

    2003-01-01

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions to the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirements, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ).
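
    The binary-contraction-ordering problem that the TCE solves can be illustrated, in a much simpler setting, with NumPy's einsum path optimizer, which likewise searches for a pairwise contraction order minimizing the estimated operation count. This is an analogy with toy tensors, not the TCE itself.

```python
import numpy as np

# Toy tensors standing in for the operands of a multiple tensor contraction
rng = np.random.default_rng(1)
A = rng.random((10, 12, 14))
B = rng.random((14, 16))
C = rng.random((16, 12))

# einsum_path chooses a binary contraction order and reports FLOP/memory estimates
path, info = np.einsum_path("ijk,kl,lj->i", A, B, C, optimize="optimal")
print(path)
print(info)

result = np.einsum("ijk,kl,lj->i", A, B, C, optimize=path)
print(result.shape)
```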

  16. The Use of Two Culturing Methods in Parallel Reveals a High Prevalence and Diversity of Arcobacter spp. in a Wastewater Treatment Plant

    Directory of Open Access Journals (Sweden)

    Arturo Levican

    2016-01-01

    Full Text Available The genus Arcobacter includes species considered emerging food- and waterborne pathogens. Although Arcobacter has been linked to the presence of faecal pollution, few studies have investigated its prevalence in wastewater, and the only species isolated were Arcobacter butzleri and Arcobacter cryaerophilus. This study aimed to establish the prevalence of Arcobacter spp. at a WWTP using two culturing methods in parallel (direct plating and culturing after enrichment) together with direct detection by m-PCR. In addition, the genetic diversity of the isolates was established using the ERIC-PCR genotyping method. Most of the wastewater samples (96.7%) were positive for Arcobacter, and a high genetic diversity was observed among the 651 investigated isolates, which belonged to 424 different ERIC genotypes. However, only a few strains persisted at different dates or sampling points. The use of direct plating in parallel with culturing after enrichment allowed recovery of the species A. butzleri, A. cryaerophilus, Arcobacter thereius, Arcobacter defluvii, Arcobacter skirrowii, Arcobacter ellisii, Arcobacter cloacae, and Arcobacter nitrofigilis, most of them isolated for the first time from wastewater. The predominant species overall was A. butzleri; by direct plating, however, A. cryaerophilus predominated. The overall predominance of A. butzleri was therefore a bias associated with the use of enrichment.

  17. Automated vector selection of SIVQ and parallel computing integration MATLAB TM : Innovations supporting large-scale and high-throughput image analysis studies

    Directory of Open Access Journals (Sweden)

    Jerome Cheng

    2011-01-01

    Full Text Available Introduction: Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. Methods: An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Results: Use of the above-mentioned automated vector selection process was demonstrated in two use cases: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an

  18. Reductions in self-reported stress and anticipatory heart rate with the use of a semi-automated parallel parking system.

    Science.gov (United States)

    Reimer, Bryan; Mehler, Bruce; Coughlin, Joseph F

    2016-01-01

    Drivers' reactions to a semi-autonomous parallel parking assistance technology were evaluated in a field experiment. A sample of 42 drivers balanced by gender and across three age groups (20-29, 40-49, 60-69) were given a comprehensive briefing, saw the technology demonstrated, practiced parallel parking 3 times each with and without the assistive technology, and then were assessed on an additional 3 parking events each with and without the technology. Anticipatory stress, as measured by heart rate, was significantly lower when drivers approached a parking space knowing that they would be using the assistive technology as opposed to manually parking. Self-reported stress levels following assisted parks were also lower. Thus, both subjective and objective data support the position that the assistive technology reduced stress levels in drivers who were given detailed training. It was observed that drivers decreased their use of turn signals when using the semi-autonomous technology, raising a caution concerning unintended lapses in safe driving behaviors that may occur when assistive technologies are used. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Structure_threader: An improved method for automation and parallelization of programs structure, fastStructure and MavericK on multicore CPU systems.

    Science.gov (United States)

    Pina-Martins, Francisco; Silva, Diogo N; Fino, Joana; Paulo, Octávio S

    2017-11-01

    Structure_threader is a program to parallelize multiple runs of genetic clustering software that does not make use of multithreading technology (structure, fastStructure and MavericK) on multicore computers. Our approach was benchmarked across multiple systems and displayed great speed improvements relative to the single-threaded implementation, scaling very close to linearly with the number of physical cores used. Structure_threader was compared to previous software written for the same task (ParallelStructure and StrAuto) and was shown to be the faster wrapper (up to 25% faster) under all tested scenarios. Furthermore, Structure_threader can perform several automatic and convenient operations, assisting the user in assessing the most biologically likely value of 'K' via implementations of the "Evanno" or "Thermodynamic Integration" tests, and automatically draws the "meanQ" plots (static or interactive) for each value of K (or even combined plots). Structure_threader is written in Python 3 and licensed under the GPLv3. It can be downloaded free of charge at https://github.com/StuntsPT/Structure_threader. © 2017 John Wiley & Sons Ltd.
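
Structure_threader's own command line is documented at the project repository; the sketch below only illustrates the general wrapping strategy the abstract describes: launching many independent, single-threaded clustering runs (one per K value and replicate) across the available CPU cores. The binary name and flags are placeholders, not the actual interface of structure, fastStructure, MavericK, or Structure_threader.

```python
# Minimal sketch of the wrapping idea (not Structure_threader's actual interface):
# launch independent single-threaded clustering runs for several K values and
# replicates across the available CPU cores.
import subprocess
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_once(k, rep):
    """Invoke a hypothetical single-threaded clustering binary for one (K, replicate) pair."""
    out = f"results_K{k}_rep{rep}"
    cmd = ["clustering_binary", "-K", str(k), "--seed", str(rep), "-o", out]  # placeholder CLI
    subprocess.run(cmd, check=True)
    return out

if __name__ == "__main__":
    ks, reps, cores = range(1, 9), range(10), 8
    with ProcessPoolExecutor(max_workers=cores) as pool:
        # one process per run; each run itself stays single-threaded
        outputs = list(pool.map(run_once, *zip(*product(ks, reps))))
```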

  20. Organizational changes and automation: By means of a customer-oriented policy the so-called 'island culture' disappears: Part 2

    International Nuclear Information System (INIS)

    Van Gelder, J.W.

    1994-01-01

    Automation offers great opportunities in the efforts of energy utilities in the Netherlands to reorganize towards more customer-oriented businesses. However, automation in itself is not enough; first, the organizational structure has to be changed considerably. Various energy utilities have already started on this. The restructuring principle is the same everywhere, but the way it is implemented differs widely. In this article attention is paid to different customer information systems, which can put an end to the so-called island culture within energy utility organizations. The systems discussed are IRD of Systema and RIVA of SAP (both German software companies), and two Dutch systems: Numis-2000 of Multihouse and KIS/400 of NUON Info-Systemen.

  1. Manual versus automated streaking system in clinical microbiology laboratory: Performance evaluation of Previ Isola for blood culture and body fluid samples.

    Science.gov (United States)

    Choi, Qute; Kim, Hyun Jin; Kim, Jong Wan; Kwon, Gye Cheol; Koo, Sun Hoe

    2018-01-04

    The process of plate streaking has been automated to improve the routine workflow of clinical microbiology laboratories. Although there have been many evaluation reports on the inoculation of various body fluid samples, few evaluations have been reported for blood. In this study, we evaluated the performance of an automated inoculating system, Previ Isola, for various routine clinical samples including blood. Blood culture, body fluid, and urine samples were collected. All samples were inoculated on both sheep blood agar plates (BAP) and MacConkey agar plates (MCK) using Previ Isola and the manual method. We compared the two methods with respect to the quality and quantity of cultures and to sample processing time. To ensure objective colony counting, an enumeration reading reference was made through a preliminary experiment. A total of 377 nonduplicate samples (102 blood culture, 203 urine, 72 body fluid) were collected and inoculated. The concordance rate of quality was 100%, 97.0%, and 98.6% in blood, urine, and other body fluids, respectively. In the quantitative aspect, it was 98.0%, 97.0%, and 95.8%, respectively. The Previ Isola took a little longer to inoculate the specimens than the manual method, but the hands-on time decreased dramatically. The hands-on time saved using Previ Isola was about 6 minutes per 10 samples. We demonstrated that the Previ Isola showed high concordance with the manual method in the inoculation of various body fluids, especially in blood culture samples. The use of Previ Isola in clinical microbiology laboratories is expected to save considerable time and human resources. © 2018 Wiley Periodicals, Inc.

  2. Parallel rendering

    Science.gov (United States)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

  3. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  4. Arginine as an eluent for automated on-line Protein A/size exclusion chromatographic analysis of monoclonal antibody aggregates in cell culture.

    Science.gov (United States)

    Wang, Sean; Raghani, Anil

    2014-01-15

    An automated two-dimensional method using Protein A chromatography followed by size exclusion HPLC was developed for analysis of aggregates of monoclonal antibodies in mammalian cell culture samples. The method development was intended to address the analysis of IgG2 monoclonal antibody products, which are particularly prone to aggregation at the low pH conventionally used for elution in Protein A chromatography; an arginine solution was therefore used as the eluent for the Protein A step. In addition, the arginine solution is a compatible mobile phase for the analysis of these samples by size exclusion HPLC, separating aggregates from the monomer of the monoclonal antibody. The effect of arginine concentration in the eluent on parameters such as protein recovery from Protein A chromatography and resolution of aggregates from the monomer is reported. The developed method was shown to provide accuracy of reported aggregates greater than 98%, and intermediate precision of 4.4% RSD. The method limit of quantitation for aggregates was determined to be 0.1%. Application of the method is demonstrated for the analysis of aggregates in cell culture samples to aid in the development of cell culture conditions for the production of antibodies. Copyright © 2013 Elsevier B.V. All rights reserved.
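
The aggregate level reported by such a two-dimensional method is typically an area percentage of the size exclusion peaks; the small sketch below shows that calculation with an assumed 0.1% limit of quantitation, matching the figure quoted above. It is an illustration of the reporting arithmetic, not the authors' processing software.

```python
# Hedged sketch: report aggregate level as an area percentage of integrated SEC peaks,
# flagging results below an assumed 0.1% limit of quantitation (LOQ).
def aggregate_percent(aggregate_area, monomer_area, loq=0.1):
    total = aggregate_area + monomer_area
    pct = 100.0 * aggregate_area / total
    return pct if pct >= loq else float("nan")  # below LOQ: not quantifiable

# Example: 1.2 area units of aggregate against 98.8 of monomer -> 1.2%
print(aggregate_percent(1.2, 98.8))
```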

  5. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  6. Culture expansion of adipose derived stromal cells. A closed automated Quantum Cell Expansion System compared with manual flask-based culture

    DEFF Research Database (Denmark)

    Haack-Sørensen, Mandana; Follin, Bjarke; Juhl, Morten

    2016-01-01

    Background: Adipose derived stromal cells (ASCs) are a rich and convenient source of cells for clinical regenerative therapeutic approaches. However, applications of ASCs often require cell expansion to reach the needed dose. In this study, cultivation of ASCs from stromal vascular fraction (SVF) over two passages in the automated and functionally closed Quantum Cell Expansion System (Quantum system) is compared with traditional manual cultivation. Methods: Stromal vascular fraction was isolated from abdominal fat, suspended in α-MEM supplemented with 10% Fetal Bovine Serum and seeded …, and endotoxins, in addition to the assessment of cell counts, viability, immunophenotype, and differentiation potential. Results: The viability of ASCs passage 0 (P0) and P1 was above 96%, regardless of cultivation in flasks or Quantum system. Expression of surface markers and differentiation potential…

  7. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  8. New Low-Cost Automated Processing of Digital Photos for Documentation and Visualisation of the Cultural Heritage

    Directory of Open Access Journals (Sweden)

    Karel Pavelka

    2011-12-01

    Full Text Available 3D scanning is nowadays a commonly used and fast technique. A variety of 3D scanner types is available, with different precision and intended uses. From the normal user's point of view, all the instruments are very expensive and need special software for processing the measured data. Transportation of 3D scanners also poses a problem, because duty or special taxes have to be paid for transport outside the EU, and there is a risk of damage when dismantling these very expensive instruments, after which recalibration is needed. For this reason, a simple and automated documentation technique using close range photogrammetry is very important. This paper describes our experience with a software solution for automatic image correlation techniques and their utilization in close range photogrammetry and the documentation of historical objects. A non-photogrammetrical approach, which often gives very good outputs, is described in the last part of this contribution. Image correlation proceeds well only on appropriate parts of documented objects and depends on the number of images, their overlap and configuration, the radiometric quality of the photos, and the surface texture.
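
Automatic image correlation of the kind discussed rests on comparing a template patch against candidate positions in a second photo; the snippet below implements plain normalized cross-correlation as a minimal illustration. Real photogrammetric packages add image pyramids, subpixel refinement, and geometric constraints, none of which are shown here.

```python
# Minimal normalized cross-correlation (NCC) between a template patch and a search window,
# the basic matching operation behind automatic image correlation.
import numpy as np

def ncc_match(template, window):
    """Return the (x, y) offset in `window` where `template` correlates best, and the score."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    th, tw = template.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(window.shape[0] - th + 1):
        for x in range(window.shape[1] - tw + 1):
            patch = window[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((t * p).mean())
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```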

  9. Automated Storage Retrieval System (ASRS) Role Towards Achievement of Safety Objective and Safety Culture in Radioactive Storage Facilities

    International Nuclear Information System (INIS)

    Mohamad Hakiman Mohd Yusoff; Nurul Wahida Ahmad Khairuddin; Nik Marzukee Nik Ibrahim; Mat Bakar Mahusin; Muhammad, Z.A.; Nur Azna Mahmud; Norfazlina Zainal Abidin

    2012-01-01

    Waste Technology Development Centre (WasTeC) was awarded the ISO 9001:2000 quality management system certification in June 2004, now known as ISO 9001:2008. The scope of the unit's ISO certification is radioactive waste management and storage of radioactive material. To meet the objectives and requirements of ISO 9001:2008, WasTeC has started a project known as the Automated Storage and Retrieval System (ASRS). ASRS is a computer-controlled method for automatically depositing and retrieving waste from defined locations. The system is used to replace the existing process of storage and retrieval of radioactive waste at the storage facility at block 33. The main objective of this project is to reduce the radiation exposure of workers and the potential forklift accidents that occur during storage and retrieval of radioactive waste. By using the ASRS system, WasTeC/Nuclear Malaysia can provide safe storage of radioactive waste, eliminate repeat handling, and improve productivity. (author)

  10. A new assay for cytotoxic lymphocytes, based on a radioautographic readout of 111In release, suitable for rapid, semi-automated assessment of limit-dilution cultures

    International Nuclear Information System (INIS)

    Shortman, K.; Wilson, A.

    1981-01-01

    A new assay for cytotoxic T lymphocytes is described, of general application, but particularly suitable for rapid, semi-automated assessment of multiple microculture tests. Target cells are labelled with high efficiency and to high specific activity with the oxine chelate of 111In (indium-111). After a 3-4 h incubation of test cells with 5 × 10^3 labelled target cells in V wells of microtitre trays, samples of the supernatant are spotted on paper (5 μl) or transferred to soft-plastic U wells (25-50 μl) and the 111In release assessed by radioautography. Overnight exposure of X-ray film with intensifying screens at -70 °C gives an image which is an intense dark spot for maximum release, a barely visible darkening with the low spontaneous release, and a definite positive with 10% specific lysis. The degree of film darkening, which can be quantitated by microdensitometry, shows a linear relationship with cytotoxic T lymphocyte dose up to the 40% lysis level. The labelling intensity and sensitivity can be adjusted over a wide range, allowing a single batch of the short half-life isotope to serve for 2 weeks. The 96 assays from a single tray are developed simultaneously on a single small sheet of film. Many trays can be processed together, and handling is rapid if 96-channel automatic pipettors are used. The method allows rapid visual scanning for positive and negative limit dilution cultures in cytotoxic T cell precursor frequency and specificity studies. In addition, in conjunction with an automated densitometer designed to scan microtitre trays, the method provides an efficient alternative to isotope counting in routine cytotoxic assays. (Auth.)
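
Isotope-release cytotoxicity assays of this kind conventionally express results as percent specific lysis relative to spontaneous and maximum release; the snippet below shows that standard calculation. Whether the authors applied exactly this formula to their densitometry readings is an assumption, and the example values are illustrative only.

```python
# Standard percent-specific-release calculation for isotope-release cytotoxicity assays
# (assumed here; the paper reads release from film darkening rather than counts).
def specific_lysis(experimental, spontaneous, maximum):
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

# Example: 450 (test), 200 (spontaneous), 1200 (maximum) release units -> 25% specific lysis
print(specific_lysis(450, 200, 1200))
```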

  11. Comparison of automated BAX PCR and standard culture methods for detection of Listeria monocytogenes in blue Crabmeat (Callinectus sapidus) and blue crab processing plants.

    Science.gov (United States)

    Pagadala, Sivaranjani; Parveen, Salina; Schwarz, Jurgen G; Rippen, Thomas; Luchansky, John B

    2011-11-01

    This study compared the automated BAX PCR with the standard culture method (SCM) to detect Listeria monocytogenes in blue crab processing plants. Raw crabs, crabmeat, and environmental sponge samples were collected monthly from seven processing plants during the plant operating season, May through November 2006. For detection of L. monocytogenes in raw crabs and crabmeat, enrichment was performed in Listeria enrichment broth, whereas for environmental samples, demi-Fraser broth was used, and then plating on both Oxford agar and L. monocytogenes plating medium was done. Enriched samples were also analyzed by BAX PCR. A total of 960 samples were examined; 59 were positive by BAX PCR and 43 by SCM. Overall, there was no significant difference (P ≤ 0.05) between the methods for detecting the presence of L. monocytogenes in samples collected from crab processing plants. Twenty-two and 18 raw crab samples were positive for L. monocytogenes by SCM and BAX PCR, respectively. Twenty and 32 environmental samples were positive for L. monocytogenes by SCM and BAX PCR, respectively, whereas only one and nine finished products were positive. The sensitivities of BAX PCR for detecting L. monocytogenes in raw crabs, crabmeat, and environmental samples were 59.1, 100, and 60%, respectively. The results of this study indicate that BAX PCR is as sensitive as SCM for detecting L. monocytogenes in crabmeat, but more sensitive than SCM for detecting this bacterium in raw crabs and environmental samples.
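
Sensitivity figures such as the 59.1% quoted for raw crabs follow the usual definition, true positives over all reference-method positives; the sketch below shows the arithmetic with illustrative counts consistent with that figure (13 of 22 culture-positive samples also detected by PCR). The counts are assumptions for illustration, not data taken from the paper.

```python
# Hedged sketch of the sensitivity calculation, using culture-confirmed positives as the
# reference standard (the paper's exact reference standard may differ).
def sensitivity(true_pos, false_neg):
    return 100.0 * true_pos / (true_pos + false_neg)

# Illustrative: 13 of 22 culture-positive raw crab samples also detected by PCR -> 59.1%
print(round(sensitivity(13, 9), 1))
```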

  12. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  13. Parallel Lines

    Directory of Open Access Journals (Sweden)

    James G. Worner

    2017-05-01

    Full Text Available James Worner is an Australian-based writer and scholar currently pursuing a PhD at the University of Technology Sydney. His research seeks to expose masculinities lost in the shadow of Australia’s Anzac hegemony while exploring new opportunities for contemporary historiography. He is the recipient of the Doctoral Scholarship in Historical Consciousness at the university’s Australian Centre of Public History and will be hosted by the University of Bologna during 2017 on a doctoral research writing scholarship.   ‘Parallel Lines’ is one of a collection of stories, The Shapes of Us, exploring liminal spaces of modern life: class, gender, sexuality, race, religion and education. It looks at lives, like lines, that do not meet but which travel in proximity, simultaneously attracted and repelled. James’ short stories have been published in various journals and anthologies.

  14. Parallel Programming Environment for OpenMP

    Directory of Open Access Journals (Sweden)

    Insung Park

    2001-01-01

    Full Text Available We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers in the time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program structure visualization, interactive optimization, support for performance modeling, and performance advising for finding and correcting performance problems. The presented evaluation demonstrates that our environment offers significant support in general parallel tuning efforts and that the toolset facilitates many common tasks in OpenMP parallel programming in an efficient manner.

  15. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Hasnain, S.B.

    1992-01-01

    Rapid advances in computing resulting from the microchip revolution have increased its applications manifold, particularly for computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems, which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The current sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence leads towards parallel processing. An overview of parallel processing with emphasis on the transputer is also provided. A fuzzy knowledge-based controller for amination drug delivery in muscle relaxant anaesthesia, implemented on a transputer, is described. 4 figs. (author)

  16. Cultural

    Science.gov (United States)

    Wilbur F. LaPage

    1971-01-01

    A critical look at outdoor recreation research and some underlying premises. The author focuses on the concept of culture as communication and how it influences our perception of problems and our search for solutions. Both outdoor recreation and science are viewed as subcultures that have their own bodies of mythology, making recreation problems more difficult to...

  17. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media, and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual efforts in library routines. Library automation supports collection, storage, administration, processing, preservation, and communication.

  18. Enhancing data parallel applications with task parallelism

    OpenAIRE

    Fernández, Jacqueline; Guerrero, Roberto A.; Piccoli, María Fabiana; Printista, Alicia Marcela; Villalobos, M.

    2001-01-01

    Most parallel applications contain data parallelism, and almost all discussion of its solutions has been limited to the simplest and least expressive form: flat data parallelism. Several generalizations of the flat data parallel model have been proposed because a large number of those applications need a combination of task and data parallelism to represent their natural computation structure and to achieve good performance in their results. Their aim is to allow the capability of combining the easi...

  19. cultural

    Directory of Open Access Journals (Sweden)

    Irene Kreutz

    2006-01-01

    Full Text Available This is a qualitative study that adopted anthropology and ethnography as its theoretical and methodological framework. It presents the experiences lived by women of a community in the health-disease process, with the objective of understanding the socio-cultural and historical determinants of the prevention and treatment practices adopted by the cultural group, by means of semi-structured interviews. The themes that emerged were: the relationship between food and the health-disease process, relations with the official health system, and the health-disease process and the supernatural. The data revealed that the residents of the investigated community have a particular way of explaining their therapeutic procedures. We consider it the role of health professionals, in their practice, to adopt approaches that consider the individual in his or her socio-cultural and historical dimension, given the enormous cultural diversity in our country.

  20. Automated fingerprint identification system

    International Nuclear Information System (INIS)

    Bukhari, U.A.; Sheikh, N.M.; Khan, U.I.; Mahmood, N.; Aslam, M.

    2002-01-01

    In this paper we present selected stages of an automated fingerprint identification system. The software for the system is developed employing algorithms for two-tone conversion, thinning, feature extraction, and matching. Taking FBI standards into account, it has been ensured that no details of the image are lost in the comparison process. We have deployed a general parallel thinning algorithm for specialized images like fingerprints and modified the original algorithm after a series of experiments, selecting the variant giving the best results. We also propose an application-based approach for designing automated fingerprint identification systems, keeping system requirements in view. We show that by using our system, the precision and efficiency of current fingerprint matching techniques are increased. (author)

  1. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  2. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  3. The PARTY parallel runtime system

    Science.gov (United States)

    Saltz, J. H.; Mirchandaney, Ravi; Smith, R. M.; Crowley, Kay; Nicol, D. M.

    1989-01-01

    The present automated system organizes the data and computational operations entailed by parallel problems in ways that optimize multiprocessor performance. General heuristics for partitioning program data and control are implemented by capturing and manipulating representations of a computation at run time. These heuristics are directed toward the dynamic identification and allocation of concurrent work in computations with irregular computational patterns. An optimized static-workload partitioning is computed for such repetitive-computation pattern problems as the iterative ones employed in scientific computation.

  4. The “parallel diplomacy” of Mussolini in Brazil: cultural, migratory and political ties in a power project (1922-1943)

    OpenAIRE

    Bertonha, João Fábio

    2012-01-01

    During the interwar period, Italy tried to expand its influence in Latin America and especially in Brazil. Without the military and economic power required for direct action, Italy used "alternative" methods to increase its influence in the country: propaganda, mainly cultural; the mobilization of Italian communities in the country; and the creation of political ties with the local fascist movement and with the Vargas regime. This paper seeks to understand this effort and...

  5. Improved techniques of parallel gap welding and monitoring

    Science.gov (United States)

    Mardesich, N.; Gillanders, M. S.

    1984-01-01

    Welding programs which show that parallel gap welding is a reliable process are discussed. When monitoring controls and nondestructive tests are incorporated into the process, parallel gap welding becomes more reliable and cost effective. The panel fabrication techniques and the HAC thermal cycling test indicate reliable product integrity. The design and building of automated tooling and fixturing for welding are discussed.

  6. An automated HIV-1 Env-pseudotyped virus production for global HIV vaccine trials.

    Directory of Open Access Journals (Sweden)

    Anke Schultz

    Full Text Available BACKGROUND: Infections with HIV still represent a major human health problem worldwide and a vaccine is the only long-term option to fight efficiently against this virus. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To cover the increasing demand for HIV pseudoviruses, a complete cell culture and transfection automation system has been developed. METHODOLOGY/PRINCIPAL FINDINGS: The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5 × 3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and the product. HIV pseudovirus stocks at scales from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automatically produced pseudoviruses were of equivalent quality to those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity. CONCLUSIONS: An automated HIV pseudovirus production system has been successfully established. It allows the high quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell

  7. Synthesis of a spatial 3-RPS parallel manipulator based on physical ...

    Indian Academy of Sciences (India)

    IEEE International Conference on Robotics and Automation 1: 345–350. Lee K, Shah D K 1988 Dynamic analysis of a three degrees of freedom in-parallel actuated manipulator. IEEE Tran. Rob. Autom. 4: 361–367. Rao N M, Rao K M 2006 Multi-position dimensional synthesis of a spatial 3-RPS parallel manipulator.

  8. Simplified automated image analysis for detection and phenotyping of Mycobacterium tuberculosis on porous supports by monitoring growing microcolonies.

    Directory of Open Access Journals (Sweden)

    Alice L den Hertog

    Full Text Available BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies, the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably, microscopic-observation drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high burden settings. METHODS: Here we explore the growth of Mycobacterium tuberculosis microcolonies, imaged by automated digital microscopy, cultured on porous aluminium oxide (PAO) supports. Repeated imaging during colony growth greatly simplifies "computer vision" and presumptive identification of microcolonies was achieved here using existing publicly available algorithms. Our system thus allows the growth of individual microcolonies to be monitored and, critically, also allows the media to be changed during the growth phase without disrupting the microcolonies. Transfer of identified microcolonies onto selective media allowed us, within 1-2 bacterial generations, to rapidly detect the drug susceptibility of individual microcolonies, eliminating the need for time-consuming subculturing or the inoculation of multiple parallel cultures. SIGNIFICANCE: Monitoring the phenotype of individual microcolonies as they grow has immense potential for research, screening, and ultimately M. tuberculosis diagnostic applications. The method described is particularly appealing with respect to speed and automation.
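
A minimal sketch of the repeated-imaging idea follows: threshold each time-lapse frame, label connected objects, and report per-object areas so that growing microcolonies can be followed over time. This is a generic illustration using scipy.ndimage, not the publicly available algorithms the authors actually employed.

```python
# Minimal sketch of microcolony growth monitoring from repeated images: threshold each
# frame, label connected objects, and report per-object areas for every time point.
import numpy as np
from scipy import ndimage

def colony_areas(frames, threshold):
    """frames: list of 2-D grayscale arrays acquired over time; returns per-object areas per frame."""
    areas = []
    for frame in frames:
        mask = frame > threshold                     # crude segmentation of bright microcolonies
        labels, n = ndimage.label(mask)              # connected-component labelling
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        areas.append(np.asarray(sizes))
    return areas                                     # growing colonies show increasing areas across frames
```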

  9. Automated External Defibrillator

    Science.gov (United States)

    Also known as: AED. What Is an Automated External Defibrillator ... in survival. Training To Use an Automated External Defibrillator: Learning how to use an AED and taking ...

  10. Library Automation.

    Science.gov (United States)

    Husby, Ole

    1990-01-01

    The challenges and potential benefits of automating university libraries are reviewed, with special attention given to cooperative systems. Aspects discussed include database size, the role of the university computer center, storage modes, multi-institutional systems, resource sharing, cooperative system management, networking, and intelligent…

  11. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  12. Steps toward an automated scorecard system.

    Science.gov (United States)

    Johnson, Mark

    2006-02-01

    An automated scorecard, a visual representation of process metrics, can help healthcare financial managers immediately identify trends against targets. Collaboration among all areas of the organization is essential in establishing a successful scorecard system. Essential considerations for implementing an automated scorecard system are: Clear metrics. Ownership and accountability for all metrics. Rapid feedback on performance. Collaboration. An open information culture.

  13. Automation for the Nineties: A Review Article.

    Science.gov (United States)

    Whitney, Gretchen; Glogoff, Stuart

    1994-01-01

    Describes the technical, political, economic, and cultural environments of library automation. A review is then presented of five books covering wide-ranging library automation themes, including practical experiences; planning second-generation library systems; software, systems, and services; new roles for librarians; and the national network…

  14. A comprehensive and precise quantification of the calanoid copepod Acartia tonsa (Dana) for intensive live feed cultures using an automated ZooImage system

    DEFF Research Database (Denmark)

    Vu, Minh Thi Thuy; Jepsen, Per Meyer; Hansen, Benni Winding

    2014-01-01

    culturing parameters in intensive A. tonsa culture such as hatching success, mortality rate, development rate, and reproduction output of A. tonsa can be calculated automatically. Moreover, ZooImage is set up with an inexpensive desktop scanner and computer that can easily be applied in aquaculture...

  15. "First generation" automated DNA sequencing technology.

    Science.gov (United States)

    Slatko, Barton E; Kieleczawa, Jan; Ju, Jingyue; Gardner, Andrew F; Hendrickson, Cynthia L; Ausubel, Frederick M

    2011-10-01

    Beginning in the 1980s, automation of DNA sequencing has greatly increased throughput, reduced costs, and enabled large projects to be completed more easily. The development of automation technology paralleled the development of other aspects of DNA sequencing: better enzymes and chemistry, separation and imaging technology, sequencing protocols, robotics, and computational advancements (including base-calling algorithms with quality scores, database developments, and sequence analysis programs). Despite the emergence of high-throughput sequencing platforms, automated Sanger sequencing technology remains useful for many applications. This unit provides background and a description of the "First-Generation" automated DNA sequencing technology. It also includes protocols for using the current Applied Biosystems (ABI) automated DNA sequencing machines. © 2011 by John Wiley & Sons, Inc.

  16. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    International Nuclear Information System (INIS)

    Karaçalı, Bilge; Vamvakidou, Alexandra P; Tözeren, Aydın

    2007-01-01

    Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin (Ecad) and progesterone receptor (PR), were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. The proposed image analysis methods offer standardized high throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, which is suitable for rapid large scale investigations of anti-cancer compounds for drug development.
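
The pixel-classification step described above can be sketched as k-means clustering of colour values into five categories; the fragment below shows that idea with scikit-learn. The choice of RGB features and the lack of any post-processing are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch of the pixel-classification step: cluster RGB pixels of a stained cross-section
# image into five categories with k-means, then reshape the labels back into an image.
import numpy as np
from sklearn.cluster import KMeans

def classify_pixels(image_rgb, n_classes=5, seed=0):
    """image_rgb: (H, W, 3) array; returns an (H, W) per-pixel category map."""
    h, w, c = image_rgb.shape
    features = image_rgb.reshape(-1, c).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(features)
    return labels.reshape(h, w)   # downstream segmentation groups pixels of the same category
```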

  17. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  18. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  19. The RABiT: a rapid automated biodosimetry tool for radiological triage. II. Technological developments.

    Science.gov (United States)

    Garty, Guy; Chen, Youhua; Turner, Helen C; Zhang, Jian; Lyulko, Oleksandra V; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Lawrence Yao, Y; Brenner, David J

    2011-08-01

    Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. The RABiT analyses fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cut-off dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day.

  20. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  1. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  2. Parallel simulation today

    Science.gov (United States)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  3. Detection of Salmonella spp. with the BACTEC 9240 Automated Blood Culture System in 2008 - 2014 in Southern Iran (Shiraz): Biogrouping, MIC, and Antimicrobial Susceptibility Profiles of Isolates

    Science.gov (United States)

    Anvarinejad, Mojtaba; Pouladfar, Gholam Reza; Pourabbas, Bahman; Amin Shahidi, Maneli; Rafaatpour, Noroddin; Dehyadegari, Mohammad Ali; Abbasi, Pejman; Mardaneh, Jalal

    2016-01-01

    Background Human salmonellosis continues to be a major international problem, in terms of both morbidity and economic losses. The antibiotic resistance of Salmonella is an increasing public health emergency, since infections from resistant bacteria are more difficult and costly to treat. Objectives The aims of the present study were to investigate the isolation of Salmonella spp. with the BACTEC automated system from blood samples during 2008 - 2014 in southern Iran (Shiraz). Detection of subspecies, biogrouping, and antimicrobial susceptibility testing by the disc diffusion and agar dilution methods were performed. Patients and Methods A total of 19 Salmonella spp. were consecutively isolated using BACTEC from blood samples of patients between 2008 and 2014 in Shiraz, Iran. The isolates were identified as Salmonella, based on biochemical tests embedded in the API-20E system. In order to characterize the biogroups and subspecies, biochemical testing was performed. Susceptibility testing (disc diffusion and agar dilution) and extended-spectrum β-lactamase (ESBL) detection were performed according to the clinical and laboratory standards institute (CLSI) guidelines. Results Of the total 19 Salmonella spp. isolates recovered by the BACTEC automated system, all belonged to the Salmonella enterica subsp. houtenae. Five isolates (26.5%) were resistant to azithromycin. Six (31.5%) isolates with the disc diffusion method and five (26.3%) with the agar dilution method displayed resistance to nalidixic acid (minimum inhibitory concentration [MIC] > 32 μg/mL). All nalidixic acid-resistant isolates were also ciprofloxacin-sensitive. All isolates were ESBL-negative. Twenty-one percent of isolates were found to be resistant to chloramphenicol (MIC ≥ 32 μg/mL), and 16% were resistant to ampicillin (MIC ≥ 32 μg/mL). Conclusions The results indicate that multidrug-resistant (MDR) strains of Salmonella are increasing in number, and fewer antibiotics may be useful for

  4. Universal protocol for the rapid automated detection of carbapenem-resistant Gram-negative bacilli directly from blood cultures by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOF/MS).

    Science.gov (United States)

    Oviaño, Marina; Sparbier, Katrin; Barba, Maria José; Kostrzewa, Markus; Bou, Germán

    2016-12-01

    Detection of carbapenemase-producing bacteria directly from blood cultures is a major challenge, as patients with bacteraemia are critically ill. Early detection can be helpful for selection of the most appropriate antibiotic therapy as well as adequate control of outbreaks. In the current study, a novel matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF)-based method was developed for the rapid, automated detection of carbapenemase-producing Enterobacteriaceae, Pseudomonas aeruginosa and Acinetobacter baumannii directly from blood cultures. Carbapenemase activity was determined in 30 min by measuring hydrolysis of imipenem (0.31 mg/mL) in blood cultures spiked with a series of 119 previously characterised isolates, 81 of which carried a carbapenemase enzyme (10 blaKPC, 10 blaVIM, 10 blaNDM, 10 blaIMP, 26 blaOXA-48-type, 9 blaOXA-23, 1 blaOXA-237, 3 blaOXA-24 and 2 blaOXA-58). Twenty blood cultures obtained from bacteraemic patients carrying blaOXA-48-producing isolates were also analysed using the same protocol. Analysis was performed using MALDI-TOF Biotyper® Compass software, which automatically provides a result of sensitivity or resistance, calculated as the logRQ, or ratio of hydrolysis of the antibiotic. This assay is simple to perform, inexpensive, time saving, universal for Gram-negative bacilli, and highly reliable (overall sensitivity and specificity of 98% and 100%, respectively). Moreover, the protocol could be established as a standardised method in clinical laboratories as it does not require specialised training in mass spectrometry. Copyright © 2016 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.
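
The abstract states only that the software reports a logRQ, a logarithmic ratio of antibiotic hydrolysis; the exact formula and decision cut-off are not given. The fragment below is therefore a schematic assumption: a log ratio of hydrolysed to intact imipenem peak intensities with a placeholder threshold, not the vendor software's validated calculation.

```python
# Schematic (assumed) form of a hydrolysis ratio: compare MALDI-TOF peak intensities of
# hydrolysed versus intact imipenem and take a log ratio; the actual logRQ formula and
# cut-off used by the Biotyper Compass software are not given in the abstract.
import math

def log_hydrolysis_ratio(hydrolysed_intensity, intact_intensity, eps=1e-9):
    return math.log10((hydrolysed_intensity + eps) / (intact_intensity + eps))

def call_carbapenemase(logrq, cutoff=0.0):   # cutoff is a placeholder, not the validated value
    return "carbapenemase activity detected" if logrq > cutoff else "no carbapenemase activity"
```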

  5. Automated Distributed Simulation in Ptolemy II

    DEFF Research Database (Denmark)

    Lázaro Cuadrado, Daniel; Ravn, Anders Peter; Koch, Peter

    2007-01-01

    the ensuing communication and synchronization problems. Very often the designer has to explicitly specify extra information concerning distribution for the framework to make an effort to exploit parallelism. This paper presents Automated Distributed Simulation (ADS), which allows the designer to forget about...... implement pipelining techniques so iterative models with purely sequential topologies can benefit from ADS....

  6. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

  7. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  8. Three-dimensional growth of human endothelial cells in an automated cell culture experiment container during the SpaceX CRS-8 ISS space mission - The SPHEROIDS project.

    Science.gov (United States)

    Pietsch, Jessica; Gass, Samuel; Nebuloni, Stefano; Echegoyen, David; Riwaldt, Stefan; Baake, Christin; Bauer, Johann; Corydon, Thomas J; Egli, Marcel; Infanger, Manfred; Grimm, Daniela

    2017-04-01

    Human endothelial cells (ECs) were sent to the International Space Station (ISS) to determine the impact of microgravity on the formation of three-dimensional structures. For this project, an automatic experiment unit (EU) was designed allowing cell culture in space. In order to enable a safe cell culture, cell nourishment and fixation after a pre-programmed timeframe, the materials used for construction of the EUs were tested in regard to their biocompatibility. These tests revealed a high biocompatibility for all parts of the EUs, which were in contact with the cells or the medium used. Most importantly, we found polyether ether ketones for surrounding the incubation chamber, which kept cellular viability above 80% and allowed the cells to adhere as long as they were exposed to normal gravity. After assembling the EU the ECs were cultured therein, where they showed good cell viability at least for 14 days. In addition, the functionality of the automatic medium exchange, and fixation procedures were confirmed. Two days before launch, the ECs were cultured in the EUs, which were afterwards mounted on the SpaceX CRS-8 rocket. 5 and 12 days after launch the cells were fixed. Subsequent analyses revealed a scaffold-free formation of spheroids in space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Autonomous Systems: Habitat Automation

    Data.gov (United States)

    National Aeronautics and Space Administration — The Habitat Automation Project Element within the Autonomous Systems Project is developing software to automate habitats and other spacecraft. This...

  10. An Automation Planning Primer.

    Science.gov (United States)

    Paynter, Marion

    1988-01-01

    This brief planning guide for library automation incorporates needs assessment and evaluation of options to meet those needs. A bibliography of materials on automation planning and software reviews, library software directories, and library automation journals is included. (CLB)

  11. Automated Budget System -

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  12. Applying Parallel Processing Techniques to Tether Dynamics Simulation

    Science.gov (United States)

    Wells, B. Earl

    1996-01-01

    The focus of this research has been to determine the effectiveness of applying parallel processing techniques to a sizable real-world problem, the simulation of the dynamics associated with a tether which connects two objects in low earth orbit, and to explore the degree to which the parallelization process can be automated through the creation of new software tools. The goal has been to utilize this specific application problem as a base to develop more generally applicable techniques.

  13. Automation 2017

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2017-01-01

    This book consists of papers presented at Automation 2017, an international conference held in Warsaw from March 15 to 17, 2017. It discusses research findings associated with the concepts behind INDUSTRY 4.0, with a focus on offering a better understanding of and promoting participation in the Fourth Industrial Revolution. Each chapter presents a detailed analysis of a specific technical problem, in most cases followed by a numerical analysis, simulation and description of the results of implementing the solution in a real-world context. The theoretical results, practical solutions and guidelines presented are valuable for both researchers working in the area of engineering sciences and practitioners looking for solutions to industrial problems.

  14. Marketing automation

    Directory of Open Access Journals (Sweden)

    TODOR Raluca Dania

    2017-01-01

    Full Text Available The automation of the marketing process seems nowadays to be the only solution to face the major changes brought by the fast evolution of technology and the continuous increase in supply and demand. In order to achieve the desired marketing results, businesses have to employ digital marketing and communication services. These services are efficient and measurable thanks to the marketing technology used to track, score and implement each campaign. Due to the technical progress, the marketing fragmentation, demand for customized products and services on one side and the need to achieve constructive dialogue with the customers, immediate and flexible response and the necessity to measure the investments and the results on the other side, the classical marketing approach has changed and continues to improve substantially.

  15. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  16. Parallelization in Modern C++

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  17. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.
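    As a small companion to the abstract's point about measuring parallel performance, the helper below computes the classical speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p from measured run times; it is an illustrative aid, not the thesis's new scalability metric, and the timing numbers are made up.

      def speedup_and_efficiency(t_serial, timings):
          """timings maps processor count p to measured wall-clock time T(p)."""
          results = {}
          for p, t_p in sorted(timings.items()):
              s = t_serial / t_p            # speedup    S(p) = T(1) / T(p)
              results[p] = (s, s / p)       # efficiency E(p) = S(p) / p
          return results

      if __name__ == "__main__":
          measured = {1: 120.0, 2: 63.0, 4: 34.0, 8: 19.5}
          for p, (s, e) in speedup_and_efficiency(120.0, measured).items():
              print(f"p={p:2d}  speedup={s:5.2f}  efficiency={e:4.2f}")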

  18. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
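
    For concreteness, here is a hedged sketch of two patterns the presentation names: a reduction (local partial sums combined across a process pool) and an inclusive prefix scan (shown serially). The chunking scheme and worker count are arbitrary.

      from itertools import accumulate
      from multiprocessing import Pool

      def chunked(xs, n):
          step = (len(xs) + n - 1) // n
          return [xs[i:i + step] for i in range(0, len(xs), step)]

      def parallel_sum(xs, nworkers=4):
          with Pool(nworkers) as pool:
              partial = pool.map(sum, chunked(xs, nworkers))   # local reductions
          return sum(partial)                                  # combine the partials

      if __name__ == "__main__":
          data = list(range(1, 1001))
          print(parallel_sum(data))                  # 500500
          print(list(accumulate([3, 1, 4, 1, 5])))   # inclusive prefix scan: [3, 4, 8, 9, 14]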

  19. PCLIPS: Parallel CLIPS

    Science.gov (United States)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C^3I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

  20. Parallel MR imaging.

    Science.gov (United States)

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2012-07-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the undersampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. Copyright © 2012 Wiley Periodicals, Inc.
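
    To make the SENSE idea concrete, the toy example below unfolds a single aliased pixel for a reduction factor of two: each coil's aliased value is a sensitivity-weighted sum of two true pixels, and the true values are recovered by least squares. The sensitivities and pixel values are invented for illustration; clinical reconstructions operate on whole images and typically include regularization.

      import numpy as np

      def sense_unfold(aliased, sens):
          """aliased: (ncoils,) aliased pixel values;
             sens:    (ncoils, 2) coil sensitivities at the two folded locations."""
          x, *_ = np.linalg.lstsq(sens, aliased, rcond=None)
          return x

      if __name__ == "__main__":
          true_pixels = np.array([2.0, 5.0])       # the two overlapping pixels
          sens = np.array([[1.0, 0.3],             # four coils, two locations
                           [0.8, 0.6],
                           [0.4, 0.9],
                           [0.2, 1.0]])
          aliased = sens @ true_pixels             # what R = 2 undersampling yields
          print(sense_unfold(aliased, sens))       # approximately [2. 5.]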

  1. Both Automation and Paper.

    Science.gov (United States)

    Purcell, Royal

    1988-01-01

    Discusses the concept of a paperless society and the current situation in library automation. Various applications of automation and telecommunications are addressed, and future library automation is considered. Automation at the Monroe County Public Library in Bloomington, Indiana, is described as an example. (MES)

  2. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  3. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    Hemanth-Kumar, K.; Young, L.C.

    1995-01-01

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90.

  4. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  5. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  6. Tiling as a Durable Abstraction for Parallelism and Data Locality

    Energy Technology Data Exchange (ETDEWEB)

    Unat, Didem [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chan, Cy P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Zhang, Weiqun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-11-18

    Tiling is a useful loop transformation for expressing parallelism and data locality. Automated tiling transformations that preserve data-locality are increasingly important due to hardware trends towards massive parallelism and the increasing costs of data movement relative to the cost of computing. We propose TiDA as a durable tiling abstraction that centralizes parameterized tiling information within array data types with minimal changes to the source code. The data layout information can be used by the compiler and runtime to automatically manage parallelism, optimize data locality, and schedule tasks intelligently. In this study, we present the design features and early interface of TiDA along with some preliminary results.
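
    The snippet below is a minimal illustration of the underlying loop-tiling idea, not TiDA itself: a 2-D array is traversed tile by tile so that each tile's working set stays small and the tiles form independent units that a runtime could assign to different threads or processes. The tile size and operation are arbitrary.

      import numpy as np

      def process_tiled(a, tile=64, op=np.sqrt):
          ny, nx = a.shape
          out = np.empty_like(a)
          tiles = [(j, i) for j in range(0, ny, tile) for i in range(0, nx, tile)]
          for j, i in tiles:                       # each tile is an independent task
              out[j:j + tile, i:i + tile] = op(a[j:j + tile, i:i + tile])
          return out

      if __name__ == "__main__":
          a = np.arange(256 * 256, dtype=float).reshape(256, 256)
          assert np.allclose(process_tiled(a), np.sqrt(a))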

  7. Automated Parallelization of Timed Petri-Net Simulations

    Science.gov (United States)

    1993-12-01

    If a suitor finds a previously engaged debutante such that the debutante prefers the new suitor to her current fiancé, the old engagement is broken and a new... sexes. A minor modification to the algorithm has the effect of selecting sexes: once an LP becomes engaged playing the role of a debutante, it is not

  8. Parallelizing Deadlock Resolution in Symbolic Synthesis of Distributed Programs

    Science.gov (United States)

    2008-01-01

    infinite computation by stuttering at sl. On the other hand, if there exists a state sd such that there is no outgoing transition (or a self-loop...

  9. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  10. CS-Studio Scan System Parallelization

    Energy Technology Data Exchange (ETDEWEB)

    Kasemir, Kay [ORNL; Pearson, Matthew R [ORNL

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
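
    As a rough illustration of the idea (not CS-Studio code), the sketch below adjusts several process variables concurrently and waits for all of them to settle before the next scan step; write_pv_and_wait is a hypothetical stand-in for the actual control-system call, and the PV names and settling time are invented.

      from concurrent.futures import ThreadPoolExecutor
      import time

      def write_pv_and_wait(name, value):          # hypothetical PV write
          time.sleep(0.5)                          # pretend the device needs time to settle
          return name, value

      def parallel_set(targets):
          with ThreadPoolExecutor(max_workers=len(targets)) as pool:
              futures = [pool.submit(write_pv_and_wait, n, v) for n, v in targets.items()]
              return dict(f.result() for f in futures)

      if __name__ == "__main__":
          step = {"Temp:Sample": 150.0, "Motor:X": 12.5, "Motor:Y": -3.0}
          print(parallel_set(step))                # all three adjustments settle concurrently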

  11. Evaluation of an ethidium monoazide-enhanced 16S rDNA real-time polymerase chain reaction assay for bacterial screening of platelet concentrates and comparison with automated culture.

    Science.gov (United States)

    Garson, Jeremy A; Patel, Poorvi; McDonald, Carl; Ball, Joanne; Rosenberg, Gillian; Tettmar, Kate I; Brailsford, Susan R; Pitt, Tyrone; Tedder, Richard S

    2014-03-01

    Culture-based systems are currently the preferred means for bacterial screening of platelet (PLT) concentrates. Alternative bacterial detection techniques based on nucleic acid amplification have also been developed but these have yet to be fully evaluated. In this study we evaluate a novel 16S rDNA polymerase chain reaction (PCR) assay and compare its performance with automated culture. A total of 2050 time-expired, 176 fresh, and 400 initial-reactive PLT packs were tested by real-time PCR using broadly reactive 16S primers and a "universal" probe (TaqMan, Invitrogen). PLTs were also tested using a microbial detection system (BacT/ALERT, bioMérieux) under aerobic and anaerobic conditions. Seven of 2050 (0.34%) time-expired PLTs were found repeat reactive by PCR on the initial nucleic acid extract but none of these was confirmed positive on testing frozen second aliquots. BacT/ALERT testing also failed to confirm any time-expired PLTs positive on repeat testing, although 0.24% were reactive on the first test. Three of the 400 "initial-reactive" PLT packs were found by both PCR and BacT/ALERT to be contaminated (Escherichia coli, Listeria monocytogenes, and Streptococcus vestibularis identified) and 14 additional packs were confirmed positive by BacT/ALERT only. In 13 of these cases the contaminating organisms were identified as anaerobic skin or oral commensals and the remaining pack was contaminated with Streptococcus pneumoniae. These results demonstrate that the 16S PCR assay is less sensitive than BacT/ALERT and inappropriate for early testing of concentrates. However, rapid PCR assays such as this may be suitable for a strategy of late or prerelease testing. © 2013 American Association of Blood Banks.

  12. Design and validation of a parallelized micro-photobioreactor enabling phototrophic bioprocess development at elevated throughput.

    Science.gov (United States)

    Morschett, Holger; Schiprowski, Danny; Müller, Carsten; Mertens, Kolja; Felden, Pamela; Meyer, Jörg; Wiechert, Wolfgang; Oldiges, Marco

    2017-01-01

    Microalgae offer great potential for the industrial production of numerous compounds, but most of the currently available processes fail on economic aspects. Due to the lack of appropriate microcultivation systems, especially screening and early stage laboratory process characterization limit throughput in process development. Consequently, a demand for high throughput photobioreactors has recently been identified upon which some prototype systems emerged. However, compared to microbial microbioreactors, the systems so far introduced suffer from at least one of several drawbacks, that is, inhomogeneous conditions, poor mixing or excessive evaporation. In this context, a microtiter plate based micro-photobioreactor was developed enabling 48-fold parallelized cultivation. Strict control of the process conditions enabled a high comparability between the distinct wells of one plate (±5% fluctuation in biomass formation). The small scale, resulting in a beneficial surface to volume ratio, as well as the fast mixing due to rigorous orbital shaking, ensured an excellent light supply of the cultures. Moreover, non-invasive online biomass quantification was implemented via a scattered light analyzer that is capable of biomass measurements during continuous illumination of the cultures. The system was shown to be especially qualified for parallelized laboratory screening applications like for instance media optimization. Easy automation via integration into a liquid handling platform is given by design. Thereby, the presented micro-photobioreactor system significantly contributes to improving the time efficiency during the development of phototrophic bioprocesses. Biotechnol. Bioeng. 2017;114: 122-131. © 2016 Wiley Periodicals, Inc.

  13. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  14. On extending parallelism to serial simulators

    Science.gov (United States)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permits automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  15. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions, illustrated by examples, are considered in these lectures. (orig.)

  16. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  17. Parallel magnetic resonance imaging

    International Nuclear Information System (INIS)

    Larkman, David J; Nunes, Rita G

    2007-01-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts are shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed. (invited topical review)

  18. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
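
    The following sketch captures, in drastically simplified form, the run-time side of such hybrid independence testing: before executing an indirectly subscripted loop in parallel, check that the written index set has no duplicates and does not overlap the read index set. It is an assumption-laden illustration, not the paper's USR/predicate machinery.

      def independent(write_idx, read_idx):
          writes = set(write_idx)
          # distinct writes and no read/write overlap => iterations are independent
          return len(writes) == len(write_idx) and writes.isdisjoint(read_idx)

      if __name__ == "__main__":
          a = [float(i) for i in range(8)]
          b = [10.0, 20.0, 30.0, 40.0]
          w, r = [0, 2, 4, 6], [1, 3, 5, 7]          # indirect subscripts
          if independent(w, r):
              # safe: this map could be distributed over threads or processes
              results = [a[r[i]] + b[i] for i in range(len(b))]
              for i, v in enumerate(results):
                  a[w[i]] = v
          else:
              for i in range(len(b)):                # conservative sequential fallback
                  a[w[i]] = a[r[i]] + b[i]
          print(a)   # [11.0, 1.0, 23.0, 3.0, 35.0, 5.0, 47.0, 7.0]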

  19. Effective Parallel Algorithm Animation

    Science.gov (United States)

    1994-03-01

    of representation but also allows different functions to be treated as a single entity. In this way the AAARF implementation allows the user to group...

  20. Autonomy and Automation

    Science.gov (United States)

    Shively, Jay

    2017-01-01

    A significant level of debate and confusion has surrounded the meaning of the terms autonomy and automation. Automation is a multi-dimensional concept, and we propose that Remotely Piloted Aircraft Systems (RPAS) automation should be described with reference to the specific system and task that has been automated, the context in which the automation functions, and other relevant dimensions. In this paper, we present definitions of automation, pilot in the loop, pilot on the loop and pilot out of the loop. We further propose that in future, the International Civil Aviation Organization (ICAO) RPAS Panel avoids the use of the terms autonomy and autonomous when referring to automated systems on board RPA. Work Group 7 proposes to develop, in consultation with other workgroups, a taxonomy of Levels of Automation for RPAS.

  1. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  2. Configuration Management Automation (CMA) -

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  3. An automated system for processing electrodermal activity.

    Science.gov (United States)

    Frantzidis, Christos A; Konstantinidis, Evdokimos; Pappas, Costas; Bamidis, Panagiotis D

    2009-01-01

    A new approach is presented in this paper for the display and processing of electrodermal activity. It offers a fully automated interface for pre-processing and scoring individual skin conductance responses (SCRs). The application supports parallel processing by means of multiple threads. Batch processing is also available. The XML format is used to describe the derived features. The system is employed to analyze emotion-related data.

  4. Parallel time integration software

    Energy Technology Data Exchange (ETDEWEB)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  5. Automation in College Libraries.

    Science.gov (United States)

    Werking, Richard Hume

    1991-01-01

    Reports the results of a survey of the "Bowdoin List" group of liberal arts colleges. The survey obtained information about (1) automation modules in place and when they had been installed; (2) financing of automation and its impacts on the library budgets; and (3) library director's views on library automation and the nature of the…

  6. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.

  7. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.
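
    In the spirit of the book, here is a small self-contained example of parallelizing an embarrassingly parallel task with the standard multiprocessing module; the workload (counting primes) and the chunking scheme are only illustrative.

      from multiprocessing import Pool, cpu_count

      def count_primes(bounds):
          lo, hi = bounds
          def is_prime(n):
              if n < 2:
                  return False
              f = 2
              while f * f <= n:
                  if n % f == 0:
                      return False
                  f += 1
              return True
          return sum(1 for n in range(lo, hi) if is_prime(n))

      if __name__ == "__main__":
          n, parts = 200_000, cpu_count()
          step = n // parts
          ranges = [(i * step, (i + 1) * step) for i in range(parts)]
          with Pool(parts) as pool:
              total = sum(pool.map(count_primes, ranges))
          print(f"primes below {parts * step}: {total}")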

  8. SPINning parallel systems software

    International Nuclear Information System (INIS)

    Matlin, O.S.; Lusk, E.; McCune, W.

    2002-01-01

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  9. Automation in Clinical Microbiology

    Science.gov (United States)

    Ledeboer, Nathan A.

    2013-01-01

    Historically, the trend toward automation in clinical pathology laboratories has largely bypassed the clinical microbiology laboratory. In this article, we review the historical impediments to automation in the microbiology laboratory and offer insight into the reasons why we believe that we are on the cusp of a dramatic change that will sweep a wave of automation into clinical microbiology laboratories. We review the currently available specimen-processing instruments as well as the total laboratory automation solutions. Lastly, we outline the types of studies that will need to be performed to fully assess the benefits of automation in microbiology laboratories. PMID:23515547

  10. Automation of industrial bioprocesses.

    Science.gov (United States)

    Beyeler, W; DaPra, E; Schneider, K

    2000-01-01

    The dramatic development of new electronic devices within the last 25 years has had a substantial influence on the control and automation of industrial bioprocesses. Within this short period of time the method of controlling industrial bioprocesses has changed completely. In this paper, the authors will use a practical approach focusing on the industrial applications of automation systems. From the early attempts to use computers for the automation of biotechnological processes up to the modern process automation systems some milestones are highlighted. Special attention is given to the influence of Standards and Guidelines on the development of automation systems.

  11. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  12. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    1998-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were

  13. Parallel k-means++

    Energy Technology Data Exchange (ETDEWEB)

    2017-04-04

    A parallelization of the k-means++ seed selection algorithm on three distinct hardware platforms: GPU, multicore CPU, and multithreaded architecture. K-means++ was developed by David Arthur and Sergei Vassilvitskii in 2007 as an extension of the k-means data clustering technique. These algorithms allow people to cluster multidimensional data, by attempting to minimize the mean distance of data points within a cluster. K-means++ improved upon traditional k-means by using a more intelligent approach to selecting the initial seeds for the clustering process. While k-means++ has become a popular alternative to traditional k-means clustering, little work has been done to parallelize this technique. We have developed original C++ code for parallelizing the algorithm on three unique hardware architectures: GPU using NVidia's CUDA/Thrust framework, multicore CPU using OpenMP, and the Cray XMT multithreaded architecture. By parallelizing the process for these platforms, we are able to perform k-means++ clustering much more quickly than it could be done before.
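
    For reference, the sketch below implements the sequential k-means++ seeding rule with the expensive step, updating every point's squared distance to its nearest chosen seed, written as a single array operation; that data-parallel step is what the report maps onto GPU, multicore and multithreaded hardware. The data and seed count are arbitrary, and this is not the report's C++ code.

      import numpy as np

      def kmeanspp_seeds(points, k, rng=None):
          rng = rng if rng is not None else np.random.default_rng(0)
          n = len(points)
          centers = [points[rng.integers(n)]]          # first seed chosen uniformly
          d2 = np.sum((points - centers[0]) ** 2, axis=1)
          for _ in range(1, k):
              probs = d2 / d2.sum()                    # sample proportional to D(x)^2
              centers.append(points[rng.choice(n, p=probs)])
              # data-parallel update of the nearest-seed distances
              d2 = np.minimum(d2, np.sum((points - centers[-1]) ** 2, axis=1))
          return np.array(centers)

      if __name__ == "__main__":
          pts = np.random.default_rng(1).normal(size=(1000, 2))
          print(kmeanspp_seeds(pts, k=5))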

  14. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  15. Expressing Parallelism with ROOT

    Science.gov (United States)

    Piparo, D.; Tejedor, E.; Guiraud, E.; Ganis, G.; Mato, P.; Moneta, L.; Valls Pla, X.; Canal, P.

    2017-10-01

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  16. Massively parallel signature sequencing.

    Science.gov (United States)

    Zhou, Daixing; Rao, Mahendra S; Walker, Roger; Khrebtukova, Irina; Haudenschild, Christian D; Miura, Takumi; Decola, Shannon; Vermaas, Eric; Moon, Keith; Vasicek, Thomas J

    2006-01-01

    Massively parallel signature sequencing is an ultra-high throughput sequencing technology. It can simultaneously sequence millions of sequence tags, and, therefore, is ideal for whole genome analysis. When applied to expression profiling, it reveals almost every transcript in the sample and provides its accurate expression level. This chapter describes the technology and its application in establishing stem cell transcriptome databases.

  17. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
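
    For background only (this is the classical formulation, not the dissertation's symmetric reformulation or its hierarchical solver), the radiosity system B = E + rho * (F @ B) can be solved by simple fixed-point iteration when all reflectivities are below one; the scene data here are random placeholders.

      import numpy as np

      def solve_radiosity(E, rho, F, iters=500):
          """Iterate B = E + rho * (F @ B) to a fixed point (patch radiosities)."""
          B = E.copy()
          for _ in range(iters):
              B = E + rho * (F @ B)        # gather light arriving from all patches
          return B

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n = 6
          F = rng.uniform(0.0, 1.0, (n, n))
          np.fill_diagonal(F, 0.0)
          F /= F.sum(axis=1, keepdims=True)    # form factors: each row sums to one
          rho = rng.uniform(0.2, 0.6, n)       # reflectivities below one => convergence
          E = np.zeros(n)
          E[0] = 1.0                           # a single emitting patch
          B = solve_radiosity(E, rho, F)
          print(np.allclose(B, E + rho * (F @ B)))   # True at the fixed point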

  18. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Quinn O. [Iowa State Univ., Ames, IA (United States)

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  19. Expressing Parallelism with ROOT

    Energy Technology Data Exchange (ETDEWEB)

    Piparo, D. [CERN; Tejedor, E. [CERN; Guiraud, E. [CERN; Ganis, G. [CERN; Mato, P. [CERN; Moneta, L. [CERN; Valls Pla, X. [CERN; Canal, P. [Fermilab

    2017-11-22

    The need for processing the ever-increasing amount of data generated by the LHC experiments in a more efficient way has motivated ROOT to further develop its support for parallelism. Such support is being tackled both for shared-memory and distributed-memory environments. The incarnations of the aforementioned parallelism are multi-threading, multi-processing and cluster-wide executions. In the area of multi-threading, we discuss the new implicit parallelism and related interfaces, as well as the new building blocks to safely operate with ROOT objects in a multi-threaded environment. Regarding multi-processing, we review the new MultiProc framework, comparing it with similar tools (e.g. multiprocessing module in Python). Finally, as an alternative to PROOF for cluster-wide executions, we introduce the efforts on integrating ROOT with state-of-the-art distributed data processing technologies like Spark, both in terms of programming model and runtime design (with EOS as one of the main components). For all the levels of parallelism, we discuss, based on real-life examples and measurements, how our proposals can increase the productivity of scientists.

  20. Parallel Splash Belief Propagation

    Science.gov (United States)

    2010-08-01


  1. Parallel Libraries to support High-Level Programming

    DEFF Research Database (Denmark)

    Larsen, Morten Nørgaard

    parallel programs by raising the abstraction level, so that the programmers can focus on implementing their algorithms and not on the details of the underlying hardware. The outcome has ranged from ideas on automating the parallelization of sequential programs to presenting a cluster of machines as if they were a single machine. In between are a number of tools helping the programmers handle communication, share data, run loops in parallel, handle algorithms mining huge amounts of data, etc. Even though most of them do a good job performance-wise, almost all of them require that the programmers learn... The development of computer architectures during the last ten years has forced programmers to move towards writing parallel programs instead of sequential ones. The homogenous multi-core architectures from the major CPU producers like Intel and AMD have led this trend, but the introduction

  2. Automated radiometric detection of bacteria

    International Nuclear Information System (INIS)

    Waters, J.R.

    1974-01-01

    A new radiometric method called BACTEC, used for the detection of bacteria in cultures or in supposedly sterile samples, was discussed from the standpoint of methodology, both automated and semi-automated. Some of the results obtained so far were reported and some future applications and development possibilities were described. In this new method, the test sample is incubated in a sealed vial with a liquid culture medium containing a 14C-labeled substrate. If bacteria are present, they break down the substrate, producing 14CO2, which is periodically extracted from the vial as a gas and is tested for radioactivity. If this gaseous radioactivity exceeds a threshold level, it is evidence of bacterial presence and growth in the test vial. The first application was for the detection of bacteria in the blood cultures of hospital patients. Data were presented showing typical results. Also discussed were future applications, such as rapid screening for bacteria in urine, industrial sterility testing, and the disposal of used 14C substrates. (Mukohata, S.)

  3. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    Science.gov (United States)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using some parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools: an interactive computer-aided parallelization tool that generates message passing code, 2) the Portland Group's HPF compiler and 3) using compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  4. Automated Fresnel lens tester system

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, G.S.

    1981-07-01

    An automated data collection system controlled by a desktop computer has been developed for testing Fresnel concentrators (lenses) intended for solar energy applications. The system maps the two-dimensional irradiance pattern (image) formed in a plane parallel to the lens while the lens and detector assembly track the sun. A point detector silicon diode (0.5-mm-dia active area) measures the irradiance at each point of an operator-defined rectilinear grid of data positions. Comparison with a second detector measuring solar insolation levels results in solar concentration ratios over the image plane. Summation of image plane energies allows calculation of lens efficiencies for various solar cell sizes. Various graphical plots of concentration ratio data help to visualize energy distribution patterns.

  5. Proceedings of the 1988 IEEE international conference on robotics and automation. Volume 1

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    These proceedings compile the papers presented at the international conference (1988) sponsored by the IEEE Council on "Robotics and Automation." The subjects discussed were: automation and robots of nuclear power stations; algorithms of multiprocessors; parallel processing and computer architecture; and U.S. DOE research programs on nuclear power plants.

  6. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
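
    A hedged sketch of the two phases described above: each worker first determines which grid portion bounds each object in its share, then each worker populates one grid portion from the classified objects. The one-dimensional slab decomposition and point-like objects are simplifying assumptions made for illustration.

      from multiprocessing import Pool

      NCELLS = 4                                   # the grid: four x-slabs over [0, 1)

      def portion_of(x):
          return min(int(x * NCELLS), NCELLS - 1)

      def classify(objects):                       # phase 1: objects -> (portion, object)
          return [(portion_of(x), (x, y)) for x, y in objects]

      def populate(args):                          # phase 2: build one portion's object list
          portion, pairs = args
          return portion, [obj for p, obj in pairs if p == portion]

      if __name__ == "__main__":
          objects = [(0.1, 0.5), (0.35, 0.2), (0.55, 0.9), (0.8, 0.4), (0.95, 0.1)]
          nworkers = 2
          chunks = [objects[i::nworkers] for i in range(nworkers)]
          with Pool(nworkers) as pool:
              pairs = [p for chunk in pool.map(classify, chunks) for p in chunk]
              grid = dict(pool.map(populate, [(c, pairs) for c in range(NCELLS)]))
          print(grid)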

  7. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  8. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Moving mechanical systems with parallel structures are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest, being fast, solid and precise. The work outlines a few main elements of Stewart platforms, beginning with the platform geometry and its kinematic elements, and then presenting a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform. The kinematics of the mobile platform are then recorded by a rotation-matrix method. If a structural motoelement consists of two moving elements that translate relative to one another, for the drive train and especially for the dynamics it is more convenient to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform 7) and one fixed part.

  9. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  10. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  11. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  12. Social aspects of automation: Some critical insights

    Science.gov (United States)

    Nouzil, Ibrahim; Raza, Ali; Pervaiz, Salman

    2017-09-01

    Sustainable development has been recognized globally as one of the major driving forces behind current technological innovations. To achieve sustainable development and attain its associated goals, it is very important to properly address its concerns in different aspects of technological innovation. Several industrial sectors have enjoyed productivity and economic gains due to the advent of automation technology, and it is therefore important to characterize sustainability for the automation technology. Sustainability is a key factor that will determine the future of our neighbours in time, and it must be tightly wrapped around the double-edged sword of technology. In this study, different impacts of automation are addressed using the ‘Circles of Sustainability’ approach as a framework, covering economic, political, cultural and ecological aspects and their implications. A systematic literature review of automation technology from its inception is outlined and plotted against its many outcomes, covering a broad spectrum. The study focuses primarily on the social aspects of automation technology. It also reviews the literature to analyse employment deficiency as one end of the social impact spectrum. On the other end of the spectrum, benefits to society through technological advancements, such as the Internet of Things (IoT) coupled with automation, are presented.

  13. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.

    2015-01-01

    The phosphoramidites of DNA monomers of 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine (Y) and 7-(3-aminopropyn-1-yl)-8-aza-7-deazaadenine LNA (Z) are synthesized, and the thermal stability at pH 7.2 and 8.2 of anti-parallel triplexes modified with these two monomers is determined. When the anti-parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased 1.2 °C and 3 °C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability 1.4 °C compared to the wild type oligonucleotide ... chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability decreased ...

  14. Automation systems for radioimmunoassay

    International Nuclear Information System (INIS)

    Yamasaki, Paul

    1974-01-01

    The application of automation systems for radioimmunoassay (RIA) was discussed. Automated systems could be useful in the second step of the four basic processes in the course of RIA, i.e., preparation of the sample for reaction. There were two types of instrumentation: a semi-automatic pipette and a fully automated pipette station, both providing fast and accurate dispensing of the reagent or dilution of the sample with reagent. Illustrations of the instruments were shown. (Mukohata, S.)

  15. Automated stopcock actuator

    OpenAIRE

    Vandehey, N. T.; O\\'Neil, J. P.

    2015-01-01

    Introduction We have developed a low-cost stopcock valve actuator for radiochemistry automation built using a stepper motor and an Arduino, an open-source single-board microcontroller. The controller hardware can be programmed to run by serial communication or via two 5–24 V digital lines for simple integration into any automation control system. This valve actuator allows for automated use of a single, disposable stopcock, providing a number of advantages over stopcock manifold systems ...

  16. Automated Analysis of Accountability

    DEFF Research Database (Denmark)

    Bruni, Alessandro; Giustolisi, Rosario; Schürmann, Carsten

    2017-01-01

    ... that are amenable to automated verification. Our definitions are general enough to be applied to different classes of protocols and different automated security verification tools. Furthermore, we point out formally the relation between verifiability and accountability. We validate our definitions with the automatic verification of three protocols: a secure exam protocol, Google’s Certificate Transparency, and an improved version of Bingo Voting. We find through automated verification that all three protocols satisfy verifiability while only the first two protocols meet accountability.

  17. Economical parallel oligonucleotide and peptide synthesizer - PET OLIGATOR

    Czech Academy of Sciences Publication Activity Database

    Lebl, M.; Pistek, Ch.; Hachmann, J.; Mudra, Petr; Pešek, Václav; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel

    2007-01-01

    Roč. 13, 1/2 (2007), s. 367-375 ISSN 1573-3149 Grant - others:NIH SBIR(US) R43 GM61511-01; NIH SBIR(US) R43 GM58981-01 Institutional research plan: CEZ:AV0Z40550506 Keywords : automated synthesizer * centrifugation * parallel synthesis Subject RIV: CC - Organic Chemistry Impact factor: 0.971, year: 2007

  18. Automated manufacturing of chimeric antigen receptor T cells for adoptive immunotherapy using CliniMACS prodigy.

    Science.gov (United States)

    Mock, Ulrike; Nickolay, Lauren; Philip, Brian; Cheung, Gordon Weng-Kit; Zhan, Hong; Johnston, Ian C D; Kaiser, Andrew D; Peggs, Karl; Pule, Martin; Thrasher, Adrian J; Qasim, Waseem

    2016-08-01

    Novel cell therapies derived from human T lymphocytes are exhibiting enormous potential in early-phase clinical trials in patients with hematologic malignancies. Ex vivo modification of T cells is currently limited to a small number of centers with the required infrastructure and expertise. The process requires isolation, activation, transduction, expansion and cryopreservation steps. To simplify procedures and widen applicability for clinical therapies, automation of these procedures is being developed. The CliniMACS Prodigy (Miltenyi Biotec) has recently been adapted for lentiviral transduction of T cells and here we analyse the feasibility of a clinically compliant T-cell engineering process for the manufacture of T cells encoding chimeric antigen receptors (CAR) for CD19 (CAR19), a widely targeted antigen in B-cell malignancies. Using a closed, single-use tubing set we processed mononuclear cells from fresh or frozen leukapheresis harvests collected from healthy volunteer donors. Cells were phenotyped and subjected to automated processing and activation using TransAct, a polymeric nanomatrix activation reagent incorporating CD3/CD28-specific antibodies. Cells were then transduced and expanded in the CentriCult-Unit of the tubing set, under stabilized culture conditions with automated feeding and media exchange. The process was continuously monitored to determine kinetics of expansion, transduction efficiency and phenotype of the engineered cells in comparison with small-scale transductions run in parallel. We found that transduction efficiencies, phenotype and function of CAR19 T cells were comparable with existing procedures and overall T-cell yields sufficient for anticipated therapeutic dosing. The automation of closed-system T-cell engineering should improve dissemination of emerging immunotherapies and greatly widen applicability. Copyright © 2016. Published by Elsevier Inc.

  19. Management Planning for Workplace Automation.

    Science.gov (United States)

    McDole, Thomas L.

    Several factors must be considered when implementing office automation. Included among these are whether or not to automate at all, the effects of automation on employees, requirements imposed by automation on the physical environment, effects of automation on the total organization, and effects on clientele. The reasons behind the success or…

  20. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Automated cloning methods

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  2. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  3. Automation in the clinical microbiology laboratory.

    Science.gov (United States)

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Parallel processing of remotely sensed data: Application to the ATSR-2 instrument

    Science.gov (United States)

    Simpson, J.; McIntire, T.; Berg, J.; Tsou, Y.

    2007-01-01

    Massively parallel computational paradigms can mitigate many issues associated with the analysis of large and complex remotely sensed data sets. Recently, the Beowulf cluster has emerged as the most attractive massively parallel architecture due to its low cost and high performance. Whereas most Beowulf designs have emphasized numerical modeling applications, the Parallel Image Processing Environment (PIPE) specifically addresses the unique requirements of remote sensing applications. Automated parallelization of user-defined analyses is fully supported. A neural network application applied to Along Track Scanning Radiometer-2 (ATSR-2) data shows the advantages and performance characteristics of PIPE.
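    The following sketch is not PIPE itself; it merely illustrates, under simple assumptions, how a user-defined analysis of a remotely sensed scene can be parallelized across a pool of workers in Python, with a trivial rescaling standing in for the neural network:

    import numpy as np
    from multiprocessing import Pool

    def user_analysis(tile):
        """User-defined analysis applied to one tile of the scene; here a simple
        per-tile standardization stands in for the neural network application."""
        return (tile - tile.mean()) / (tile.std() + 1e-9)

    def split_rows(image, n_chunks):
        """Divide the scene into row bands, one per worker."""
        return np.array_split(image, n_chunks, axis=0)

    if __name__ == "__main__":
        scene = np.random.rand(2048, 512).astype(np.float32)   # stand-in for ATSR-2 data
        n_workers = 4
        with Pool(n_workers) as pool:
            processed = pool.map(user_analysis, split_rows(scene, n_workers))
        result = np.vstack(processed)
        print(result.shape)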

  5. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
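    For reference, the parallel combination of two resistors is R_t = R1*R2/(R1 + R2). A short Python sketch in the spirit of the tables the record describes (the specific table values are not reproduced here) tabulates resistor pairs whose parallel resistance is a whole number:

    # Parallel resistance of two resistors: R_t = (R1 * R2) / (R1 + R2).
    def parallel(r1, r2):
        return r1 * r2 / (r1 + r2)

    # Tabulate pairs (in ohms) whose parallel combination is a whole number,
    # in the spirit of the classroom tables described in the record.
    pairs = [(r1, r2)
             for r1 in range(1, 61)
             for r2 in range(r1, 61)
             if (r1 * r2) % (r1 + r2) == 0]

    for r1, r2 in pairs[:10]:
        print(f"{r1} || {r2} = {parallel(r1, r2):.0f} ohms")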

  6. Very Large Parallel Data Flow

    Science.gov (United States)

    1988-03-01

    ... billion characters. Teradata Corporation’s DBC/1012, a parallel relational database machine, is another high performance engine for large database ... the volume of data being processed. Such parallelism is currently exploited in multiprocessor relational database machines such as the Teradata DBC ... scale parallelism can be achieved in all-solutions relations using or-parallelism in a multiprocessor architecture. For instance, the Teradata database ...

  7. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors ... an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  8. Parallel Architectures and Bioinspired Algorithms

    CERN Document Server

    Pérez, José; Lanchares, Juan

    2012-01-01

    This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map of the main paths already explored and new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to specialists in Bioinspired Algorithms and in Parallel and Distributed Computing, as well as to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.

  9. Integrating Task and Data Parallelism

    OpenAIRE

    Massingill, Berna

    1993-01-01

    Many models of concurrency and concurrent programming have been proposed; most can be categorized as either task-parallel (based on functional decomposition) or data-parallel (based on data decomposition). Task-parallel models are most effective for expressing irregular computations; data-parallel models are most effective for expressing regular computations. Some computations, however, exhibit both regular and irregular aspects. For such computations, a better programming model is one that i...
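    A minimal Python sketch, using the standard concurrent.futures module, may help illustrate the distinction drawn here between data decomposition and functional (task) decomposition; the example computations are arbitrary stand-ins:

    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        return x * x

    def summarize(xs):           # one "functional" task
        return sum(xs)

    def extremes(xs):            # an unrelated task run concurrently
        return min(xs), max(xs)

    if __name__ == "__main__":
        data = list(range(1_000))
        with ProcessPoolExecutor() as pool:
            # Data parallelism: the same operation applied to partitions of the data.
            squares = list(pool.map(square, data, chunksize=100))
            # Task parallelism: different operations (a functional decomposition)
            # launched concurrently on the same data.
            total = pool.submit(summarize, data)
            lo_hi = pool.submit(extremes, data)
            print(squares[:5], total.result(), lo_hi.result())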

  10. Parallel Pascal - An extended Pascal for parallel computers

    Science.gov (United States)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.
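    Since no Parallel Pascal code is reproduced in this record, the following hedged NumPy analogue merely illustrates the kind of whole-array expressions such a language exposes in place of explicit element-by-element loops; it is not Parallel Pascal syntax:

    import numpy as np

    a = np.arange(12.0).reshape(3, 4)
    b = np.ones((3, 4))

    # Whole-array expressions, analogous in spirit to Parallel Pascal's array
    # operations: every element is combined without writing explicit serial
    # loops, which is what lets a compiler map the work onto parallel hardware.
    c = 2.0 * a + b           # elementwise arithmetic over the whole array
    row_sums = c.sum(axis=1)  # a reduction expressed as a single operation
    mask = c > 5.0            # an elementwise comparison yielding a boolean array

    print(c, row_sums, mask, sep="\n")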

  11. Massively Parallel Genetics.

    Science.gov (United States)

    Shendure, Jay; Fields, Stanley

    2016-06-01

    Human genetics has historically depended on the identification of individuals whose natural genetic variation underlies an observable trait or disease risk. Here we argue that new technologies now augment this historical approach by allowing the use of massively parallel assays in model systems to measure the functional effects of genetic variation in many human genes. These studies will help establish the disease risk of both observed and potential genetic variants and to overcome the problem of "variants of uncertain significance." Copyright © 2016 by the Genetics Society of America.

  12. Parallel sphere rendering

    Energy Technology Data Exchange (ETDEWEB)

    Krogh, M.; Hansen, C.; Painter, J. [Los Alamos National Lab., NM (United States); de Verdiere, G.C. [CEA Centre d`Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.
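    The divide-render-composite pattern described above can be sketched in Python as follows; the orthographic flat-shaded renderer and the use of a local process pool are simplifying assumptions and bear no relation to the CM-5/T3D implementation:

    import numpy as np
    from multiprocessing import Pool

    WIDTH = HEIGHT = 64

    def render_partial(spheres):
        """Render one worker's spheres into a colour buffer plus a depth buffer
        (an orthographic, flat-shaded stand-in for the real renderer)."""
        colour = np.zeros((HEIGHT, WIDTH))
        depth = np.full((HEIGHT, WIDTH), np.inf)
        ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
        for cx, cy, cz, r, shade in spheres:
            mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2
            closer = mask & (cz < depth)
            colour[closer] = shade
            depth[closer] = cz
        return colour, depth

    def composite(partials):
        """Combine partial images by keeping, per pixel, the nearest sphere."""
        colour = np.zeros((HEIGHT, WIDTH))
        depth = np.full((HEIGHT, WIDTH), np.inf)
        for c, d in partials:
            nearer = d < depth
            colour[nearer] = c[nearer]
            depth[nearer] = d[nearer]
        return colour

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        spheres = rng.uniform(5, 60, size=(400, 5))       # x, y, z, radius, shade
        chunks = np.array_split(spheres, 4)               # divide the data
        with Pool(4) as pool:
            partials = pool.map(render_partial, chunks)   # render independently
        image = composite(partials)                       # final composite
        print(image.shape, image.max())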

  13. Parallel Repetition From Fortification

    OpenAIRE

    Moshkovitz Aaronson, Dana Hadar

    2014-01-01

    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games – “fortification” – and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from...

  14. Parallelized direct execution simulation of message-passing parallel programs

    Science.gov (United States)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  15. Automated System Marketplace 1994.

    Science.gov (United States)

    Griffiths, Jose-Marie; Kertis, Kimberly

    1994-01-01

    Reports results of the 1994 Automated System Marketplace survey based on responses from 60 vendors. Highlights include changes in the library automation marketplace; estimated library systems revenues; minicomputer and microcomputer-based systems; marketplace trends; global markets and mergers; research needs; new purchase processes; and profiles…

  16. Automation benefits BWR customers

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    A description is given of the increasing use of automation at General Electric's Wilmington fuel fabrication plant. Computerised systems and automated equipment perform a large number of inspections, inventory and process operations, and new advanced systems are being continuously introduced to reduce operator errors and expand product reliability margins. (U.K.)

  17. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Software engineers are increasingly turning to the option of automating functional tests, but they are not always successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  18. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and

  19. Identity Management Processes Automation

    Directory of Open Access Journals (Sweden)

    A. Y. Lavrukhin

    2010-03-01

    Implementation of identity management systems consists of two main parts: consulting and automation. The consulting part includes development of a role model and a description of the identity management processes. The automation part is based on the results of the consulting part. This article describes the most important aspects of IdM implementation.

  20. Work and Programmable Automation.

    Science.gov (United States)

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  1. Library Automation in Pakistan.

    Science.gov (United States)

    Haider, Syed Jalaluddin

    1998-01-01

    Examines the state of library automation in Pakistan. Discusses early developments; financial support by the Netherlands Library Development Project (Pakistan); lack of automated systems in college/university and public libraries; usage by specialist libraries; efforts by private-sector libraries and the National Library in Pakistan; commonly used…

  2. Library Automation Style Guide.

    Science.gov (United States)

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  3. Planning for Office Automation.

    Science.gov (United States)

    Sherron, Gene T.

    1982-01-01

    The steps taken toward office automation by the University of Maryland are described. Office automation is defined and some types of word processing systems are described. Policies developed in the writing of a campus plan are listed, followed by a section on procedures adopted to implement the plan. (Author/MLW)

  4. The Automated Office.

    Science.gov (United States)

    Naclerio, Nick

    1979-01-01

    Clerical personnel may be able to climb career ladders as a result of office automation and expanded job opportunities in the word processing area. Suggests opportunities in an automated office system and lists books and periodicals on word processing for counselors and teachers. (MF)

  5. Automating the Small Library.

    Science.gov (United States)

    Skapura, Robert

    1987-01-01

    Discusses the use of microcomputers for automating school libraries, both for entire systems and for specific library tasks. Highlights include available library management software, newsletters that evaluate software, constructing an evaluation matrix, steps to consider in library automation, and a brief discussion of computerized card catalogs.…

  6. Advances in inspection automation

    Science.gov (United States)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  7. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  8. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  9. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, the solid phase red cell adherence assay, and the erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests, and it reduces human errors in patient identification as well as transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with a shorter turnaround time for an ever increasing workload. This article discusses the various issues involved in the process.

  10. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1,024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  11. Fast parallel event reconstruction

    CERN Document Server

    CERN. Geneva

    2010-01-01

    On-line processing of the large data volumes produced in modern HEP experiments requires using the maximum capabilities of modern and future many-core CPU and GPU architectures. One such powerful feature is the SIMD instruction set, which allows packing several data items into one register and operating on all of them at once, thus achieving more operations per clock cycle. Motivated by the idea of using the SIMD unit of modern processors, the KF-based track fit has been adapted for parallelism, including memory optimization, numerical analysis, vectorization with inline operator overloading, and optimization using SDKs. The speed of the algorithm has been increased 120,000 times, with 0.1 ms/track, running in parallel on 16 SPEs of a Cell Blade computer. Running on a Nehalem CPU with 8 cores it shows a processing speed of 52 ns/track using the Intel Threading Building Blocks. The same KF algorithm running on an Nvidia GTX 280 in the CUDA framework provi...

  12. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

    This book covers mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology is based on the author's screw theory, proposed in 1997, whose generality and validity were only proved recently; mobility itself is a very complex issue, researched by various scientists over the last 150 years. The principle of the kinematic influence coefficient and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity analysis is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, along with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  13. Massively Parallel QCD

    Energy Technology Data Exchange (ETDEWEB)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  14. Automation of Hubble Space Telescope Mission Operations

    Science.gov (United States)

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry

    2012-01-01

    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations, including telemetry and forward link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions, and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities of the new automated operations era. This paper will provide a high level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  15. Balanço de água por aquisição automática de dados em cultura de trigo (Triticum aestivum L.) / The daily water consumption of a wheat culture using atmospheric and soil data

    Directory of Open Access Journals (Sweden)

    Celso Luiz Prevedello

    2007-02-01

    Using automated acquisition of atmospheric data and soil water content data, this work quantified the daily water consumption of a wheat crop on an Oxisol (Latossolo Vermelho) in the municipality of Ponta Grossa, Paraná State, Brazil, from August to December 2003, emphasizing the contribution of rainfall and of upward water fluxes from the deeper soil layers to that consumption. The results showed that, over the monitored period: (a) the mean daily water depth evapotranspired by the wheat crop was 6.75 mm, with the upward water flux in the soil profile contributing 62 % of this total; (b) the evapotranspiration rates estimated by the Penman method and by the soil water balance equation shifted in time with approximately equal symmetry, but with a lag of about seven days, as if the soil responded to the variations imposed by the atmosphere roughly one week later; (c) rainfall had an important effect on soil water storage, contributing to higher evapotranspiration rates; and (d) because the mean matric potential in the root zone was close to the critical limit for the crop, it was concluded that irrigation could have potentially positive impacts on the crop, by making more water available in the soil and ensuring higher evapotranspiration rates, as is agronomically desirable.

  16. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
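    A hedged numerical sketch of the contrast drawn in this record, using Jones matrices, is given below: the serial architecture composes elements as a product of matrices, while the parallel architecture combines intensity-weighted, spatially separated components as a sum of matrices. The particular matrices and weights are illustrative assumptions, not the experimental configuration:

    import numpy as np

    # Jones matrices for a horizontal polarizer and a quarter-wave plate at 45 degrees.
    H_POL = np.array([[1, 0], [0, 0]], dtype=complex)
    QWP_45 = 0.5 * np.array([[1 + 1j, 1 - 1j],
                             [1 - 1j, 1 + 1j]], dtype=complex)

    e_in = np.array([1.0, 0.0], dtype=complex)   # input field: horizontal polarization

    # Serial architecture: elements act one after another, i.e. a product of matrices.
    serial_out = QWP_45 @ H_POL @ e_in

    # Parallel architecture: spatially separated components are each transformed,
    # intensity-modulated (weights w_i, a stand-in for the micromirror modulation),
    # and then recombined -- a sum of matrices rather than a product.
    weights = np.array([0.7, 0.3])
    branches = [H_POL, QWP_45]
    parallel_matrix = sum(w * M for w, M in zip(weights, branches))
    parallel_out = parallel_matrix @ e_in

    print(serial_out, parallel_out)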

  17. Parallel imaging microfluidic cytometer.

    Science.gov (United States)

    Ehrlich, Daniel J; McKenna, Brian K; Evans, James G; Belkina, Anna C; Denis, Gerald V; Sherr, David H; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of fluorescence-activated flow cytometry (FCM) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in ∼6-10 min, about 30 times the speed of most current FCM systems. In 1D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of charge-coupled device (CCD)-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. On-Site School Library Automation: Automation Anywhere with Laptops.

    Science.gov (United States)

    Gunn, Holly; Oxner, June

    2000-01-01

    Four years after the Halifax Regional School Board was formed through amalgamation, over 75% of its school libraries were automated. On-site automation with laptops was a quicker, more efficient way of automating than sending a shelf list to the Technical Services Department. The Eastern Shore School Library Automation Project was a successful…

  19. Automated electron microprobe

    International Nuclear Information System (INIS)

    Thompson, K.A.; Walker, L.R.

    1986-01-01

    The Plant Laboratory at the Oak Ridge Y-12 Plant has recently obtained a Cameca MBX electron microprobe with a Tracor Northern TN5500 automation system. This allows full stage and spectrometer automation and digital beam control. The capabilities of the system include qualitative and quantitative elemental microanalysis for all elements above and including boron in atomic number, high- and low-magnification imaging and processing, elemental mapping and enhancement, and particle size, shape, and composition analyses. Very low magnification, quantitative elemental mapping using stage control (which is of particular interest) has been accomplished along with automated size, shape, and composition analysis over a large relative area

  20. Operational proof of automation

    International Nuclear Information System (INIS)

    Jaerschky, R.; Reifenhaeuser, R.; Schlicht, K.

    1976-01-01

    Automation of the power plant process may involve quite a number of problems. The automation of dynamic operations requires complicated programmes, often interfering in several branched areas. This reduces clarity for the operating and maintenance staff, whilst increasing the possibility of errors. The synthesis and the organization of standardized equipment have proved very successful. The possibilities offered by this kind of automation for improving the operation of power plants will only be sufficiently and correctly turned to profit, however, if the application of these equipment techniques is further improved and if its volume is tallied with a definite etc. (orig.) [de]

  1. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure.The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management

  2. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not commensurate with the effort expended. This paper aims to make a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  3. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs, and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.
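    As a simplified illustration of the mosaicking step (tile placement only, ignoring the registration and overlap handling the real pipeline requires), a Python sketch might look like this:

    import numpy as np

    def assemble_mosaic(tiles, grid_shape):
        """Place individually captured tiles into one large panorama image.
        `tiles` is a flat list ordered row by row; overlap and registration,
        which the real pipeline must handle, are ignored here."""
        rows, cols = grid_shape
        tile_h, tile_w = tiles[0].shape
        mosaic = np.zeros((rows * tile_h, cols * tile_w), dtype=tiles[0].dtype)
        for idx, tile in enumerate(tiles):
            r, c = divmod(idx, cols)
            mosaic[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w] = tile
        return mosaic

    if __name__ == "__main__":
        tiles = [np.random.rand(512, 512).astype(np.float32) for _ in range(6 * 8)]
        panorama = assemble_mosaic(tiles, grid_shape=(6, 8))
        print(panorama.shape)   # (3072, 4096) -- a small stand-in for Gigapixel mosaics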

  4. Parallel Framework for Cooperative Processes

    Directory of Open Access Journals (Sweden)

    Mitică Craus

    2005-01-01

    This paper describes an object-oriented framework designed to be used in the parallelization of a set of related algorithms. The idea behind the system is to have a reusable framework for running several sequential algorithms in a parallel environment. The algorithms the framework can be used with have several things in common: they have to run in cycles, and it must be possible to split the work between several "processing units". The parallel framework uses the message-passing communication paradigm and is organized as a master-slave system. Two applications are presented: an Ant Colony Optimization (ACO) parallel algorithm for the Travelling Salesman Problem (TSP) and an Image Processing (IP) parallel algorithm for the Symmetrical Neighborhood Filter (SNF). The implementations of these applications by means of the parallel framework prove to have good performance: approximately linear speedup and low communication cost.
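    A minimal Python sketch of the master-slave cycle described here is shown below; it uses a local process pool rather than message passing, and a trivial smoothing filter stands in for the ACO and SNF computations:

    from multiprocessing import Pool
    import numpy as np

    def slave_work(chunk):
        """One processing unit's share of a cycle; a simple smoothing filter
        stands in for the real per-cycle computation."""
        return np.convolve(chunk, np.ones(3) / 3.0, mode="same")

    def master(data, n_slaves, n_cycles):
        """Master loop: each cycle, split the work, hand it to the slaves,
        then gather and merge the partial results."""
        with Pool(n_slaves) as pool:
            for _ in range(n_cycles):
                chunks = np.array_split(data, n_slaves)
                data = np.concatenate(pool.map(slave_work, chunks))
        return data

    if __name__ == "__main__":
        signal = np.random.rand(10_000)
        print(master(signal, n_slaves=4, n_cycles=5)[:5])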

  5. Automation Interface Design Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Our research makes its contributions at two levels. At one level, we addressed the problems of interaction between humans and computers/automation in a particular...

  6. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  7. Fixed automated spray technology.

    Science.gov (United States)

    2011-04-19

    This research project evaluated the construction and performance of Boschung's Fixed Automated Spray Technology (FAST) system. The FAST system automatically sprays de-icing material on the bridge when icing conditions are about to occur. The FA...

  8. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  9. Automation synthesis modules review

    International Nuclear Information System (INIS)

    Boschi, S.; Lodi, F.; Malizia, C.; Cicoria, G.; Marengo, M.

    2013-01-01

    The introduction of 68Ga-labelled tracers has changed the diagnostic approach to neuroendocrine tumours, and the availability of a reliable, long-lived 68Ge/68Ga generator has been at the basis of the development of 68Ga radiopharmacy. The huge increase in clinical demand, the impact of regulatory issues and careful radioprotection of the operators have pushed for extensive automation of the production process. The development of automated systems for 68Ga radiochemistry, different engineering and software strategies, and post-processing of the eluate are discussed, along with the impact of automation on compliance with regulations. - Highlights: ► Generator availability and robust chemistry have driven the wide diffusion of 68Ga radiopharmaceuticals. ► Different technological approaches for 68Ga radiopharmaceuticals are discussed. ► Generator eluate post-processing and the evolution to cassette-based systems are the major issues in automation. ► The impact of regulations on technological development is also considered.

  10. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  11. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  12. Automated Lattice Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  13. Automated ISMS control auditability

    OpenAIRE

    Suomu, Mikko

    2015-01-01

    This thesis focuses on researching a possible reference model for automated technical control auditability of an ISMS (Information Security Management System). The main objective was to develop a generic framework for automated compliance status monitoring against the ISO 27001:2013 standard which could be re-used in any ISMS. The framework was tested with Proof of Concept (PoC) empirical research in a test infrastructure that simulates the framework's target deployment environment. To fulfi...

  14. Marketing automation supporting sales

    OpenAIRE

    Sandell, Niko

    2016-01-01

    The past couple of decades have been a time of major change in marketing. Digitalization has become a permanent part of marketing and has at the same time enabled the efficient collection of data. Personalization and customization of content play a crucial role in marketing when new customers are acquired. This has also created a need for automation to facilitate the distribution of targeted content. As a result of successful marketing automation, more information about the customers is gathered ...

  15. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Management

  16. Automated lattice data generation

    Directory of Open Access Journals (Sweden)

    Ayyar Venkitesh

    2018-01-01

    The process of generating ensembles of gauge configurations (and measuring various observables over them) can be tedious and error-prone when done “by hand”. In practice, most of this procedure can be automated with the use of a workflow manager. We discuss how this automation can be accomplished using Taxi, a minimal Python-based workflow manager built for generating lattice data. We present a case study demonstrating this technology.
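
    As a rough illustration of the workflow-manager idea described above, the following Python sketch runs a measurement task only after its configuration-generation prerequisite has completed. The task names and the tiny runner are hypothetical and do not reproduce the actual Taxi API.

        # Minimal dependency-ordered task runner, illustrative only (not Taxi).
        from dataclasses import dataclass, field
        from typing import Callable, List

        @dataclass
        class Task:
            name: str
            action: Callable[[], None]
            requires: List["Task"] = field(default_factory=list)
            done: bool = False

        def run(task: Task) -> None:
            """Run a task after all of its prerequisites, exactly once."""
            for dep in task.requires:
                if not dep.done:
                    run(dep)
            print("running", task.name)
            task.action()
            task.done = True

        # Hypothetical pipeline: generate a gauge configuration, then measure on it.
        generate = Task("generate_config_0100", lambda: None)
        measure = Task("measure_plaquette_0100", lambda: None, requires=[generate])
        run(measure)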

  17. Interactive/automated method to count bacterial colonies

    OpenAIRE

    Monteiro, Fernando C.; Ribeiro, J.E.; Martins, Ramiro

    2016-01-01

    The number of colonies in a culture is counted to calculate the concentration of bacteria in the original broth; however, manual counting can be tedious, time-consuming and imprecise. Automation of colony counting has been of increasing interest for many decades, and these methods have been shown to be more consistent than manual counting. Significant limitations of many algorithms used in automated systems are their inability to recognize overlapping colonies as distinct and to count colonie...

  18. A Parallel Particle Swarm Optimizer

    National Research Council Canada - National Science Library

    Schutte, J. F; Fregly, B .J; Haftka, R. T; George, A. D

    2003-01-01

    .... Motivated by a computationally demanding biomechanical system identification problem, we introduce a parallel implementation of a stochastic population based global optimizer, the Particle Swarm...

  19. Track filter on the basis of a cellular automaton

    International Nuclear Information System (INIS)

    Glazov, A.A.; Kisel', I.V.; Konotopskaya, E.V.; Ososkov, G.A.

    1991-01-01

    A filtering method for tracks in discrete detectors based on a cellular automaton is described. Application of the method to experimental data (the ARES spectrometer) was quite successful: a threefold reduction of input information, with the data grouped according to the track they belong to. This raises the percentage of useful events, which considerably simplifies and accelerates their subsequent recognition. The described cellular automaton for track filtering can be applied successfully on parallel computers and also in on-line mode if a hardware implementation is used. 21 refs.; 11 figs
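
    As a rough, hypothetical illustration of the cellular-automaton principle (not the ARES implementation), the sketch below builds segments between hits on adjacent detector planes and lets each segment's counter grow while it has a left neighbour with an equal counter, so that long straight chains of hits accumulate high counters:

        # Toy cellular-automaton track filter, illustrative only.
        # (plane index, y coordinate) of hits: one straight track plus noise.
        hits = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 3.1), (1, 5.0), (2, -3.0)]

        def compatible(a, b, max_step=1.5):
            """Hits on adjacent planes form a segment if y changes by at most max_step."""
            return b[0] == a[0] + 1 and abs(b[1] - a[1]) <= max_step

        segments = [(a, b) for a in hits for b in hits if compatible(a, b)]
        counter = {s: 1 for s in segments}

        changed = True
        while changed:                      # synchronous evolution until stable
            changed = False
            snapshot = dict(counter)
            for (a, b) in segments:
                if any(snapshot[(c, d)] == snapshot[(a, b)]
                       for (c, d) in segments if d == a):
                    counter[(a, b)] = snapshot[(a, b)] + 1
                    changed = True

        # Segments on the long chain end up with the highest counters and are
        # kept; isolated noise segments keep counter 1 and are discarded.
        print(counter)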

  20. Parallelizing Monte Carlo with PMC

    International Nuclear Information System (INIS)

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described
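
    One requirement highlighted above, independent and reproducible random-number sequences for every node, can be sketched with NumPy's seed-spawning mechanism (a generic illustration, not the PMC interface itself):

        # Reproducible, independent random streams per worker; illustrative only.
        import numpy as np

        MASTER_SEED = 20240101            # fixed seed => runs are reproducible
        N_WORKERS = 4

        child_seeds = np.random.SeedSequence(MASTER_SEED).spawn(N_WORKERS)
        streams = [np.random.default_rng(s) for s in child_seeds]

        # Each worker draws only from its own stream, so results do not depend
        # on how the work is interleaved across processors.
        for rank, rng in enumerate(streams):
            print(rank, rng.random(3))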

  1. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally we prove that the family of languages of finite index is contained in the family of parallel context...

  2. Parallel pseudospectral domain decomposition techniques

    Science.gov (United States)

    Gottlieb, David; Hirsch, Richard S.

    1989-01-01

    The influence of interface boundary conditions on the ability to parallelize pseudospectral multidomain algorithms is investigated. Using the properties of spectral expansions, a novel parallel two domain procedure is generalized to an arbitrary number of domains each of which can be solved on a separate processor. This interface boundary condition considerably simplifies influence matrix techniques.

  3. Parallelization of the FLAPW method

    International Nuclear Information System (INIS)

    Canning, A.; Mannstadt, W.; Freeman, A.J.

    1999-01-01

    The FLAPW (full-potential linearized-augmented plane-wave) method is one of the most accurate first-principles methods for determining electronic and magnetic properties of crystals and surfaces. Until the present work, the FLAPW method has been limited to systems of less than about one hundred atoms due to a lack of an efficient parallel implementation to exploit the power and memory of parallel computers. In this work we present an efficient parallelization of the method by division among the processors of the plane-wave components for each state. The code is also optimized for RISC (reduced instruction set computer) architectures, such as those found on most parallel computers, making full use of BLAS (basic linear algebra subprograms) wherever possible. Scaling results are presented for systems of up to 686 silicon atoms and 343 palladium atoms per unit cell, running on up to 512 processors on a CRAY T3E parallel computer

  4. Mass culture of photobacteria to obtain luciferase

    Science.gov (United States)

    Chappelle, E. W.; Picciolo, G. L.; Rich, E., Jr.

    1969-01-01

    Inoculating preheated trays containing nutrient agar with photobacteria provides a means for mass culture of aerobic microorganisms in order to obtain large quantities of luciferase. To determine optimum harvest time, growth can be monitored by automated light-detection instrumentation.

  5. A new English–Arabic parallel text corpus for lexicographic ...

    African Journals Online (AJOL)

    ... drama, etc. Their Arabic translations were taken from The World of Knowledge series published by the National Council for Culture, Arts and Letters (NCCAL) in Kuwait. Keywords: parallel corpus, lexicography, translation, bilingual dictionary, collocations, alignment, synonyms, derivatives, antonyms, glossary, frequency ...

  6. A study of parallel proverbs in Akan (Twi)

    African Journals Online (AJOL)

    In Akan and Kiswahili, there are several proverbs that express the same underlying idea, oftentimes in the exact same or similar ways. There are several possible reasons why these parallel proverbs exist. In one line of thinking, the similarities may be due to contact phenomena facilitating shared cultural and/or historical ...

  7. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
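
    The rendezvous point described above, where every process must stop at the end of a cycle so the fission source and the multiplication-factor estimate can be assembled, can be sketched with mpi4py. This is a simplified toy, not the program analyzed in the paper, and the per-cycle statistic is a placeholder.

        # Toy per-cycle rendezvous in a parallel criticality-style calculation.
        from mpi4py import MPI
        import random

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        HISTORIES_PER_PROC = 10_000

        for cycle in range(5):
            # Independent work: each process tracks its own share of histories.
            local_production = sum(random.random() for _ in range(HISTORIES_PER_PROC))

            # Rendezvous: every process blocks here; with many processors and a
            # small workload per processor this synchronization dominates.
            total_production = comm.allreduce(local_production, op=MPI.SUM)
            k_estimate = total_production / (size * HISTORIES_PER_PROC)

            if rank == 0:
                print(f"cycle {cycle}: mean production per history = {k_estimate:.4f}")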

  8. One-step sample preparation of positive blood cultures for the direct detection of methicillin-sensitive and -resistant Staphylococcus aureus and methicillin-resistant coagulase-negative staphylococci within one hour using the automated GenomEra CDX™ PCR system.

    Science.gov (United States)

    Hirvonen, J J; von Lode, P; Nevalainen, M; Rantakokko-Jalava, K; Kaukoranta, S-S

    2012-10-01

    A method for the rapid detection of methicillin-sensitive and -resistant Staphylococcus aureus (MSSA and MRSA, respectively) and methicillin-resistant coagulase-negative staphylococci (MRCoNS) with a straightforward sample preparation protocol of blood cultures using an automated homogeneous polymerase chain reaction (PCR) assay, the GenomEra™ MRSA/SA (Abacus Diagnostica Oy, Turku, Finland), is presented. In total, 316 BacT/Alert (bioMérieux, Marcy l'Etoile, France) and 433 BACTEC (Becton Dickinson, Sparks, MD, USA) blood culture bottles were analyzed, including 725 positive cultures containing Gram-positive cocci in clusters (n = 419) and other Gram stain forms (n = 361), as well as 24 signal- and growth-negative bottles. Detection sensitivities for MSSA, MRSA, and MRCoNS were 99.4 % (158/159), 100.0 % (9/9), and 99.3 % (132/133), respectively. One false-positive MRSA result was detected from a non-staphylococci-containing bottle, yielding a specificity of 99.8 %. The lowest detectable amount of viable cells in the blood culture sample was 4 × 10⁴ CFU/mL. The results were available within one hour after microbial growth detection and the two-step, time-resolved fluorometric (TRF) measurement mode employed by the GenomEra CDX™ instrument showed no interference from blood, charcoal, or culture media. The method described dispenses with all sample purification steps and allows reliable and simplified pathogen detection also in clinical microbiology laboratory settings without specialized molecular microbiology competence.

  9. Template based parallel checkpointing in a massively parallel computer system

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
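
    A minimal sketch of the block-and-checksum idea (generic Python with a hypothetical block size, not the patented implementation): only blocks whose checksum differs from the template checkpoint are kept, and those blocks are compressed before storage.

        # Template-based checkpoint sketch: split the node's state into fixed
        # blocks, keep only the blocks whose checksum differs from a previously
        # saved template checkpoint, and compress them.  Illustrative only.
        import hashlib, zlib

        BLOCK = 4096

        def blocks(data: bytes):
            return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

        def checksums(data: bytes):
            return [hashlib.sha1(b).hexdigest() for b in blocks(data)]

        def delta_checkpoint(state: bytes, template: bytes):
            """Return {block_index: compressed_block} for blocks that changed."""
            tmpl_sums = checksums(template)
            delta = {}
            for i, b in enumerate(blocks(state)):
                unchanged = i < len(tmpl_sums) and hashlib.sha1(b).hexdigest() == tmpl_sums[i]
                if not unchanged:
                    delta[i] = zlib.compress(b)
            return delta

        template = bytes(64 * BLOCK)                 # previously saved checkpoint
        state = bytearray(template)
        state[5 * BLOCK] = 0xFF                      # one block changed since then
        print(len(delta_checkpoint(bytes(state), template)), "block(s) to store")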

  10. 76 FR 16032 - 60-Day Notice of Proposed Information Collection: Bureau of Educational and Cultural Affairs...

    Science.gov (United States)

    2011-03-22

    ... of Educational and Cultural Affairs, Office of Policy and Evaluation, Evaluation Division: Sports... Cultural Affairs, Office of Policy and Evaluation, Evaluation Division: Sports & Culture Evaluation IWP... the use of automated collection techniques or other forms of technology. Abstract of proposed...

  11. 76 FR 16030 - 60-Day Notice of Proposed Information Collection: Bureau of Educational and Cultural Affairs...

    Science.gov (United States)

    2011-03-22

    ... of Educational and Cultural Affairs, Office of Policy and Evaluation, Evaluation Division: Sports... and Cultural Affairs, Office of Policy and Evaluation, Evaluation Division: Sports & Culture... the use of automated collection techniques or other forms of technology. Abstract of proposed...

  12. Automating the CMS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  13. Control and automation systems

    International Nuclear Information System (INIS)

    Schmidt, R.; Zillich, H.

    1986-01-01

    A survey is given of the development of control and automation systems for energy uses. General remarks about control and automation schemes are followed by a description of modern process control systems along with process control processes as such. After discussing the particular process control requirements of nuclear power plants the paper deals with the reliability and availability of process control systems and refers to computerized simulation processes. The subsequent paragraphs are dedicated to descriptions of the operating floor, ergonomic conditions, existing systems, flue gas desulfurization systems, the electromagnetic influences on digital circuits as well as of light wave uses. (HAG) [de

  14. Automated nuclear materials accounting

    International Nuclear Information System (INIS)

    Pacak, P.; Moravec, J.

    1982-01-01

    An automated state system of accounting for nuclear materials data was established in Czechoslovakia in 1979. A file was compiled of 12 programs in the PL/1 language. The file is divided into four groups according to logical associations, namely programs for data input and checking, programs for handling the basic data file, programs for report outputs in the form of worksheets and magnetic tape records, and programs for book inventory listing, document inventory handling and materials balance listing. A similar automated system of nuclear fuel inventory for a light water reactor was introduced for internal purposes in the Institute of Nuclear Research (UJV). (H.S.)

  15. Balanced, parallel operation of flashlamps

    International Nuclear Information System (INIS)

    Carder, B.M.; Merritt, B.T.

    1979-01-01

    A new energy store, the Compensated Pulsed Alternator (CPA), promises to be a cost effective substitute for capacitors to drive flashlamps that pump large Nd:glass lasers. Because the CPA is large and discrete, it will be necessary that it drive many parallel flashlamp circuits, presenting a problem in equal current distribution. Current division to ±20% between parallel flashlamps has been achieved, but this is marginal for laser pumping. A method is presented here that provides equal current sharing to about 1%, and it includes fused protection against short circuit faults. The method was tested with eight parallel circuits, including both open-circuit and short-circuit fault tests

  16. Parallel performance of a preconditioned CG solver for unstructured finite element applications

    Energy Technology Data Exchange (ETDEWEB)

    Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K. [Sandia National Labs., Albuquerque, NM (United States)

    1994-12-31

    A parallel unstructured finite element (FE) implementation designed for message passing MIMD machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
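
    The two communication kernels referred to above, nearest-neighbour exchange of values on shared nodes and global reductions for the CG dot products, can be sketched with mpi4py. The partition data structures (neighbors, shared_idx) are hypothetical placeholders, not the implementation described in the paper.

        # Dominant communication kernels of a distributed-memory CG solver.
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        def exchange_boundary(local_vec, neighbors, shared_idx):
            """Swap shared-node values with each neighbouring partition
            (neighbors and shared_idx are hypothetical partition metadata)."""
            received = {}
            for nbr in neighbors:
                sendbuf = local_vec[shared_idx[nbr]]
                received[nbr] = comm.sendrecv(sendbuf, dest=nbr, source=nbr)
            return received

        def parallel_dot(x, y):
            """Global dot product: the unavoidable synchronization point of CG."""
            return comm.allreduce(float(np.dot(x, y)), op=MPI.SUM)

        x = np.ones(1000)
        print(rank, "global ||x||^2 =", parallel_dot(x, x))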

  17. Altering users' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints imbedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed after exposure to a task with automation assistance, user acceptance of high(er) levels of automation ('tipping point') decreased; suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. 'tipping point') is constant or can be manipulated. The results revealed after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased; suggesting it is possible to alter automation acceptance.

  18. Idaho: Library Automation and Connectivity.

    Science.gov (United States)

    Bolles, Charles

    1996-01-01

    Provides an overview of the development of cooperative library automation and connectivity in Idaho, including telecommunications capacity, library networks, the Internet, and the role of the state library. Information on six shared automation systems in Idaho is included. (LRW)

  19. Application of parallel processing for automatic inspection of printed circuits

    International Nuclear Information System (INIS)

    Lougheed, R.M.

    1986-01-01

    Automated visual inspection of printed electronic circuits is a challenging application for image processing systems. Detailed inspection requires high speed analysis of gray scale imagery along with high quality optics, lighting, and sensing equipment. A prototype system has been developed and demonstrated at the Environmental Research Institute of Michigan (ERIM) for inspection of multilayer thick-film circuits. The central problem of real-time image processing is solved by a special-purpose parallel processor which includes a new high-speed Cytocomputer. In this chapter the inspection process and the algorithms used are summarized, along with the functional requirements of the machine vision system. Next, the parallel processor is described in detail and then performance on this application is given

  20. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
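
    The relationships the activity is meant to convey can be stated in a couple of lines (a small illustrative helper, not part of the original article): series resistances add directly, while parallel resistances add as conductances.

        # Combining resistances: series values add, parallel conductances add.
        def series(*rs):
            return sum(rs)

        def parallel(*rs):
            return 1.0 / sum(1.0 / r for r in rs)

        print(series(100, 220, 330))          # 650 ohms
        print(round(parallel(100, 100), 1))   # 50.0 ohms: equal pair halves R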

  1. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  2. Metal structures with parallel pores

    Science.gov (United States)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  3. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  4. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    The underlying neural mechanisms of a perceptual bias for in-phase bimanual coordination movements are not well understood. In the present study, we measured brain activity with functional magnetic resonance imaging in healthy subjects during a task, where subjects performed bimanual index finger adduction-abduction movements symmetrically or in parallel with real-time congruent or incongruent visual feedback of the movements. One network, consisting of bilateral superior and middle frontal gyrus and supplementary motor area (SMA), was more active when subjects performed parallel movements, whereas a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  5. High-Throughput Automation in Chemical Process Development.

    Science.gov (United States)

    Selekman, Joshua A; Qiu, Jun; Tran, Kristy; Stevens, Jason; Rosso, Victor; Simmons, Eric; Xiao, Yi; Janey, Jacob

    2017-06-07

    High-throughput (HT) techniques built upon laboratory automation technology and coupled to statistical experimental design and parallel experimentation have enabled the acceleration of chemical process development across multiple industries. HT technologies are often applied to interrogate wide, often multidimensional experimental spaces to inform the design and optimization of any number of unit operations that chemical engineers use in process development. In this review, we outline the evolution of HT technology and provide a comprehensive overview of how HT automation is used throughout different industries, with a particular focus on chemical and pharmaceutical process development. In addition, we highlight the common strategies of how HT automation is incorporated into routine development activities to maximize its impact in various academic and industrial settings.

  6. Parallel Processing and Scientific Applications

    Science.gov (United States)

    1992-11-30

    the algorithm have computational complexity O(N) and be scalable. Multigrid Methods. Among the known scalable algorithms for elliptic PDE solution, the ... close to linear in N and scales linearly in P. However, multigrid methods have a particular difficulty on parallel machines in that they do not have ... This inherent disadvantage of multigrid methods has led to the search for truly parallel MG methods - multigrid methods that can utilize all processors

  7. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  8. A Survey of Parallel A*

    OpenAIRE

    Fukunaga, Alex; Botea, Adi; Jinnai, Yuu; Kishimoto, Akihiro

    2017-01-01

    A* is a best-first search algorithm for finding optimal-cost paths in graphs. A* benefits significantly from parallelism because in many applications, A* is limited by memory usage, so distributed memory implementations of A* that use all of the aggregate memory on the cluster enable problems that can not be solved by serial, single-machine implementations to be solved. We survey approaches to parallel A*, focusing on decentralized approaches to A* which partition the state space among proces...
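
    A central ingredient of the decentralized approaches surveyed is a hash function that assigns every search state to an owning process, so newly generated states are forwarded to their owners rather than expanded locally. A tiny sketch of that ownership rule follows (illustrative only, not any specific system from the survey):

        # Hash-based state partitioning, the core idea behind decentralized
        # parallel A*: every generated state is sent to the process that owns
        # it, and each process keeps its own open/closed lists.
        import hashlib

        N_PROCS = 8

        def owner(state: tuple, n_procs: int = N_PROCS) -> int:
            """Deterministically map a state to the rank that must expand it."""
            digest = hashlib.md5(repr(state).encode()).hexdigest()
            return int(digest, 16) % n_procs

        # Successors of a state are scattered across processes, balancing load.
        successors = [(1, 2, 3), (1, 3, 2), (2, 1, 3)]
        print({s: owner(s) for s in successors})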

  9. Automated HAZOP revisited

    DEFF Research Database (Denmark)

    Taylor, J. R.

    2017-01-01

    Hazard and operability analysis (HAZOP) has developed from a tentative approach to hazard identification for process plants in the early 1970s to an almost universally accepted approach today, and a central technique of safety engineering. Techniques for automated HAZOP analysis were developed...

  10. Automated data model evaluation

    International Nuclear Information System (INIS)

    Kazi, Zoltan; Kazi, Ljubica; Radulovic, Biljana

    2012-01-01

    The modeling process is an essential phase within information systems development and implementation. This paper presents methods and techniques for analysis and evaluation of data model correctness. Recent methodologies and development results regarding automation of the process of model correctness analysis and its relations with ontology tools have been presented. Key words: Database modeling, Data model correctness, Evaluation

  11. Automated Vehicle Monitoring System

    OpenAIRE

    Wibowo, Agustinus Deddy Arief; Heriansyah, Rudi

    2014-01-01

    An automated vehicle monitoring system is proposed in this paper. The surveillance system is based on image processing techniques such as background subtraction, colour balancing, chain-code-based shape detection, and blob analysis. The proposed system detects any human head appearing in the side mirrors. The detected head is then tracked and recorded for further action.

  12. Automated Accounting. Instructor Guide.

    Science.gov (United States)

    Moses, Duane R.

    This curriculum guide was developed to assist business instructors using Dac Easy Accounting College Edition Version 2.0 software in their accounting programs. The module consists of four units containing assignment sheets and job sheets designed to enable students to master competencies identified in the area of automated accounting. The first…

  13. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  14. Protokoller til Home Automation

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the status of the home can be viewed from a computer or perhaps even from a mobile phone. While the applications mentioned are classics within home automation, additional functionality has appeared...

  15. Automated Water Extraction Index

    DEFF Research Database (Denmark)

    Feyisa, Gudina Legese; Meilby, Henrik; Fensholt, Rasmus

    2014-01-01

    of various sorts of environmental noise and at the same time offers a stable threshold value. Thus we introduced a new Automated Water Extraction Index (AWEI) improving classification accuracy in areas that include shadow and dark surfaces that other classification methods often fail to classify correctly...

  16. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2015-01-01

    Myths in the automation of software testing are a recurring topic of discussion in the software validation industry. The first thought of a knowledgeable reader would probably be: why this old topic again, and what is new to discuss? Yet everyone agrees that test automation today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications now has a complex architecture and hybrid frameworks that facilitate testing applications developed on various platforms and technologies. Automation has undoubtedly advanced, but so have the myths associated with it. The change in people's perspective and knowledge of automation has altered the terrain. This article reflects the author's points of view and experience regarding how the original myths have been transformed into new versions and how they are derived; it also provides his thoughts on the new generation of myths.

  17. Driver Psychology during Automated Platooning

    NARCIS (Netherlands)

    Heikoop, D.D.

    2017-01-01

    With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their

  18. Exploiting Symmetry on Parallel Architectures.

    Science.gov (United States)

    Stiller, Lewis Benjamin

    1995-01-01

    This thesis describes techniques for the design of parallel programs that solve well-structured problems with inherent symmetry. Part I demonstrates the reduction of such problems to generalized matrix multiplication by a group-equivariant matrix. Fast techniques for this multiplication are described, including factorization, orbit decomposition, and Fourier transforms over finite groups. Our algorithms entail interaction between two symmetry groups: one arising at the software level from the problem's symmetry and the other arising at the hardware level from the processors' communication network. Part II illustrates the applicability of our symmetry-exploitation techniques by presenting a series of case studies of the design and implementation of parallel programs. First, a parallel program that solves chess endgames by factorization of an associated dihedral group-equivariant matrix is described. This code runs faster than previous serial programs and discovered a number of results. Second, parallel algorithms for Fourier transforms for finite groups are developed, and preliminary parallel implementations for group transforms of dihedral and of symmetric groups are described. Applications in learning, vision, pattern recognition, and statistics are proposed. Third, parallel implementations solving several computational science problems are described, including the direct n-body problem, convolutions arising from molecular biology, and some communication primitives such as broadcast and reduce. Some of our implementations ran orders of magnitude faster than previous techniques, and were used in the investigation of various physical phenomena.

  19. Writing parallel programs that work

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Serial algorithms typically run inefficiently on parallel machines. This may sound like an obvious statement, but it is the root cause of why parallel programming is considered to be difficult. The current state of the computer industry is still that almost all programs in existence are serial. This talk will describe the techniques used in the Intel Parallel Studio to provide a developer with the tools necessary to understand the behaviors and limitations of the existing serial programs. Once the limitations are known the developer can refactor the algorithms and reanalyze the resulting programs with the tools in the Intel Parallel Studio to create parallel programs that work. About the speaker Paul Petersen is a Sr. Principal Engineer in the Software and Solutions Group (SSG) at Intel. He received a Ph.D. degree in Computer Science from the University of Illinois in 1993. After UIUC, he was employed at Kuck and Associates, Inc. (KAI) working on auto-parallelizing compiler (KAP), and was involved in th...

  20. Parallel algorithms for continuum dynamics

    International Nuclear Information System (INIS)

    Hicks, D.L.; Liebrock, L.M.

    1987-01-01

    Simply porting existing parallel programs to a new parallel processor may not achieve the full speedup possible; to achieve the maximum efficiency may require redesigning the parallel algorithms for the specific architecture. The authors discuss here parallel algorithms that were developed first for the HEP processor and then ported to the CRAY X-MP/4, the ELXSI/10, and the Intel iPSC/32. Focus is mainly on the most recent parallel processing results produced, i.e., those on the Intel Hypercube. The applications are simulations of continuum dynamics in which the momentum and stress gradients are important. Examples of these are inertial confinement fusion experiments, severe breaks in the coolant system of a reactor, weapons physics, shock-wave physics. Speedup efficiencies on the Intel iPSC Hypercube are very sensitive to the ratio of communication to computation. Great care must be taken in designing algorithms for this machine to avoid global communication. This is much more critical on the iPSC than it was on the three previous parallel processors

  1. Ethics, evolution and culture.

    Science.gov (United States)

    Mesoudi, Alex; Danielson, Peter

    2008-08-01

    Recent work in the fields of evolutionary ethics and moral psychology appears to be converging on a single empirically- and evolutionary-based science of morality or ethics. To date, however, these fields have failed to provide an adequate conceptualisation of how culture affects the content and distribution of moral norms. This is particularly important for a large class of moral norms relating to rapidly changing technological or social environments, such as norms regarding the acceptability of genetically modified organisms. Here we suggest that a science of morality/ethics can benefit from adopting a cultural evolution or gene-culture coevolution approach, which treats culture as a second, separate evolutionary system that acts in parallel to biological/genetic evolution. This cultural evolution approach brings with it a set of established theoretical concepts (e.g. different cultural transmission mechanisms) and empirical methods (e.g. evolutionary game theory) that can significantly improve our understanding of human morality.

  2. Parallel optimization methods for agile manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Meza, J.C.; Moen, C.D.; Plantenga, T.D.; Spence, P.A.; Tong, C.H. [Sandia National Labs., Livermore, CA (United States); Hendrickson, B.A.; Leland, R.W.; Reese, G.M. [Sandia National Labs., Albuquerque, NM (United States)

    1997-08-01

    The rapid and optimal design of new goods is essential for meeting national objectives in advanced manufacturing. Currently almost all manufacturing procedures involve the determination of some optimal design parameters. This process is iterative in nature and because it is usually done manually it can be expensive and time consuming. This report describes the results of an LDRD, the goal of which was to develop optimization algorithms and software tools that will enable automated design thereby allowing for agile manufacturing. Although the design processes vary across industries, many of the mathematical characteristics of the problems are the same, including large-scale, noisy, and non-differentiable functions with nonlinear constraints. This report describes the development of a common set of optimization tools using object-oriented programming techniques that can be applied to these types of problems. The authors give examples of several applications that are representative of design problems including an inverse scattering problem, a vibration isolation problem, a system identification problem for the correlation of finite element models with test data and the control of a chemical vapor deposition reactor furnace. Because the function evaluations are computationally expensive, they emphasize algorithms that can be adapted to parallel computers.

  3. Parallel Algorithm for Adaptive Numerical Integration

    International Nuclear Information System (INIS)

    Sujatmiko, M.; Basarudin, T.

    1997-01-01

    This paper presents an automated algorithm for integration using the adaptive trapezoidal method. The interval is divided adaptively, so that the widths of the subintervals differ and fit the behavior of the function. For a function f, an integral over the interval [a,b] can be obtained, with maximum tolerance ε, using the estimation (f, a, b, ε). The estimated solution is accepted if the error is within a reasonable range, fulfilling certain criteria. If the error is too big, however, the problem is solved by dividing it into two similar and independent subproblems on the separate intervals [a, (a+b)/2] and [(a+b)/2, b], i.e. the estimations (f, a, (a+b)/2, ε/2) and (f, (a+b)/2, b, ε/2). The problems are solved by two different kinds of processor, a root processor and worker processors. The root processor's function is to divide the main problem into subproblems and distribute them to the worker processors. The division mechanism may continue until all of the subproblems are resolved. The solution of each subproblem is then submitted to the root processor so that the solution of the main problem can be obtained. The algorithm is implemented in C on a distributed computer network under the Parallel Virtual Machine (PVM) platform.
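
    A serial sketch of the estimation rule described above, including the recursive split into the (f, a, (a+b)/2, ε/2) and (f, (a+b)/2, b, ε/2) subproblems that the root processor would hand to workers (illustrative Python, not the original C/PVM implementation):

        # Adaptive trapezoidal integration: accept the local estimate if refining
        # it changes the result by less than the tolerance, otherwise split the
        # interval and solve the two halves independently (these halves are what
        # a root process would distribute to workers).  Illustrative only.
        import math

        def estimate(f, a, b, eps):
            mid = 0.5 * (a + b)
            coarse = 0.5 * (b - a) * (f(a) + f(b))
            fine = 0.25 * (b - a) * (f(a) + 2.0 * f(mid) + f(b))
            if abs(fine - coarse) <= eps:        # error within a reasonable range
                return fine
            # Two similar, independent subproblems with half the tolerance each.
            return estimate(f, a, mid, eps / 2) + estimate(f, mid, b, eps / 2)

        print(estimate(math.sin, 0.0, math.pi, 1e-6))   # ~2.0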

  4. Interpreting the Data: Parallel Analysis with Sawzall

    Directory of Open Access Journals (Sweden)

    Rob Pike

    2005-01-01

    Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design – including the separation into two phases, the form of the programming language, and the properties of the aggregators – exploits the parallelism inherent in having data and computation distributed across many machines.
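
    The two-phase structure described above, a per-record filtering phase that emits values and an aggregation phase that collates them, can be mimicked in a few lines of plain Python (a stand-in for the idea only, not the Sawzall language or its distributed runtime):

        # Two-phase analysis in the Sawzall spirit: a filtering phase runs
        # independently over every record and emits (table, value) pairs; an
        # aggregation phase collates the emitted values.
        from collections import Counter
        from multiprocessing import Pool

        records = ["GET /index", "GET /index", "POST /login", "GET /about"]

        def filter_phase(record):
            """Per-record query: emit to a 'hits' table keyed by request path."""
            _, path = record.split()
            return [("hits", path)]        # list of (aggregator, value) emissions

        if __name__ == "__main__":
            with Pool(2) as pool:                   # records processed in parallel
                emissions = pool.map(filter_phase, records)

            # Aggregation phase: collate every emission into count tables.
            hits = Counter(value for batch in emissions
                                 for table, value in batch if table == "hits")
            print(hits)   # Counter({'/index': 2, '/login': 1, '/about': 1})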

  5. Parallel Algorithms for Model Checking

    NARCIS (Netherlands)

    van de Pol, Jaco; Mousavi, Mohammad Reza; Sgall, Jiri

    2017-01-01

    Model checking is an automated verification procedure, which checks that a model of a system satisfies certain properties. These properties are typically expressed in some temporal logic, like LTL and CTL. Algorithms for LTL model checking (linear time logic) are based on automata theory and graph

  6. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  7. A Parallel Approach to Fractal Image Compression

    OpenAIRE

    Lubomir Dedera

    2004-01-01

    The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms in terms of both the coding and decoding times achieved and the effectiveness of the parallelization.

  8. Parallel Implicit Algorithms for CFD

    Science.gov (United States)

    Keyes, David E.

    1998-01-01

    The main goal of this project was efficient distributed parallel and workstation cluster implementations of Newton-Krylov-Schwarz (NKS) solvers for implicit Computational Fluid Dynamics (CFD). "Newton" refers to a quadratically convergent nonlinear iteration using gradient information based on the true residual, "Krylov" to an inner linear iteration that accesses the Jacobian matrix only through highly parallelizable sparse matrix-vector products, and "Schwarz" to a domain decomposition form of preconditioning the inner Krylov iterations with primarily neighbor-only exchange of data between the processors. Prior experience has established that Newton-Krylov methods are competitive solvers in the CFD context and that Krylov-Schwarz methods port well to distributed memory computers. The combination of the techniques into Newton-Krylov-Schwarz was implemented on 2D and 3D unstructured Euler codes on the parallel testbeds that used to be at LaRC and on several other parallel computers operated by other agencies or made available by the vendors. Early implementations were made directly in the Message Passing Interface (MPI) with parallel solvers we adapted from legacy NASA codes and enhanced for full NKS functionality. Later implementations were made in the framework of the PETSc library from Argonne National Laboratory, which now includes pseudo-transient continuation Newton-Krylov-Schwarz solver capability (as a result of demands we made upon PETSc during our early porting experiences). A secondary project pursued with funding from this contract was parallel implicit solvers in acoustics, specifically in the Helmholtz formulation. A 2D acoustic inverse problem has been solved in parallel within the PETSc framework.

  9. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  10. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented which indicate under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.

  11. Parallelizing Timed Petri Net simulations

    Science.gov (United States)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  12. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  13. Investigation of vinegar production using a novel shaken repeated batch culture system.

    Science.gov (United States)

    Schlepütz, Tino; Büchs, Jochen

    2013-01-01

    Nowadays, bioprocesses are developed or optimized on small scale. Also, vinegar industry is motivated to reinvestigate the established repeated batch fermentation process. As yet, there is no small-scale culture system for optimizing fermentation conditions for repeated batch bioprocesses. Thus, the aim of this study is to propose a new shaken culture system for parallel repeated batch vinegar fermentation. A new operation mode - the flushing repeated batch - was developed. Parallel repeated batch vinegar production could be established in shaken overflow vessels in a completely automated operation with only one pump per vessel. This flushing repeated batch was first theoretically investigated and then empirically tested. The ethanol concentration was online monitored during repeated batch fermentation by semiconductor gas sensors. It was shown that the switch from one ethanol substrate quality to different ethanol substrate qualities resulted in prolonged lag phases and durations of the first batches. In the subsequent batches the length of the fermentations decreased considerably. This decrease in the respective lag phases indicates an adaptation of the acetic acid bacteria mixed culture to the specific ethanol substrate quality. Consequently, flushing repeated batch fermentations on small scale are valuable for screening fermentation conditions and, thereby, improving industrial-scale bioprocesses such as vinegar production in terms of process robustness, stability, and productivity. Copyright © 2013 American Institute of Chemical Engineers.

  14. Automated campaign system

    Science.gov (United States)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, driving production and fulfillment, and evaluating results is currently performed by experienced, highly trained staff. Presented is a solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. It highlights how the complexity of running a targeted campaign is hidden from the user through these technologies, while still providing the benefits of a professionally managed campaign.

  15. Rapid automated nuclear chemistry

    International Nuclear Information System (INIS)

    Meyer, R.A.

    1979-01-01

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC

  16. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.

  17. Automated Assembly Center (AAC)

    Science.gov (United States)

    Stauffer, Robert J.

    1993-01-01

    The objectives of this project are as follows: to integrate advanced assembly and assembly support technology under a comprehensive architecture; to implement automated assembly technologies in the production of high-visibility DOD weapon systems; and to document the improved cost, quality, and lead time. This will enhance the production of DOD weapon systems by utilizing the latest commercially available technologies combined into a flexible system that will be able to readily incorporate new technologies as they emerge. Automated assembly encompasses the following areas: product data, process planning, information management policies and framework, three schema architecture, open systems communications, intelligent robots, flexible multi-ability end effectors, knowledge-based/expert systems, intelligent workstations, intelligent sensor systems, and PDES/PDDI data standards.

  18. Automated drawing generation system

    International Nuclear Information System (INIS)

    Yoshinaga, Toshiaki; Kawahata, Junichi; Yoshida, Naoto; Ono, Satoru

    1991-01-01

    Since automated CAD drawing generation systems still required human intervention, improvements were focussed on the interactive processing section (data input and correcting operations), which necessitates a vast amount of work. As a result, human intervention was eliminated, achieving the original objective of a computerized system. This is the first step taken towards complete automation. The effects of development and commercialization of the system are as described below. (1) The interactive processing time required for generating drawings was improved. It was determined that introduction of the CAD system has reduced the time required for generating drawings. (2) The difference in skills between workers preparing drawings has been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  19. Automated breeder fuel fabrication

    International Nuclear Information System (INIS)

    Goldmann, L.H.; Frederickson, J.R.

    1983-01-01

    The objective of the Secure Automated Fabrication (SAF) Project is to develop remotely operated equipment for the processing and manufacturing of breeder reactor fuel pins. The SAF line will be installed in the Fuels and Materials Examination Facility (FMEF). The FMEF is presently under construction at the Department of Energy's (DOE) Hanford site near Richland, Washington, and is operated by the Westinghouse Hanford Company (WHC). The fabrication and support systems of the SAF line are designed for computer-controlled operation from a centralized control room. Remote and automated fuel fabrication operations will result in: reduced radiation exposure to workers; enhanced safeguards; improved product quality; near real-time accountability; and increased productivity. The present schedule calls for installation of SAF line equipment in the FMEF beginning in 1984, with qualifying runs starting in 1986 and production commencing in 1987. 5 figures

  20. Comparison of an automated pattern analysis machine vision time-lapse system with traditional endpoint measurements in the analysis of cell growth and cytotoxicity.

    Science.gov (United States)

    Toimela, Tarja; Tähti, Hanna; Ylikomi, Timo

    2008-07-01

    Machine vision is an application of computer vision. It both collects visual information and interprets the images. Although the machine obviously does not 'see' in the same sense that humans do, it is possible to acquire visual information and to create programmes to identify relevant image features in an effective and consistent manner. Machine vision is widely applied in industrial automation, but here we describe how we have used it to monitor and interpret data from cell cultures. The machine vision system used (Cell-IQ) consisted of an inbuilt atmosphere-controlled incubator, where cell culture plates were placed during the test. Artificial intelligence (AI) software, which uses machine vision technology, took care of the follow-up analysis of cellular morphological changes. Basic endpoint and staining methods to evaluate the condition of the cells were conducted in parallel to the machine vision analysis. The results showed that the automated system for pattern analysis of morphological changes yielded comparable results to those obtained by conventional methods. The inbuilt software analysis offers a promising way of evaluating cell growth and various cell phases. The continuous follow-up and label-free analysis, as well as the possibility of measuring multiple parameters simultaneously from the same cell population, were major advantages of this system, as compared to conventional endpoint measurement methodology.

  1. Automated Instrumentation System Verification.

    Science.gov (United States)

    1983-04-01

    [Only fragments of the scanned abstract are legible; security-classification page headers have been omitted.] ...automatic measurement should arise. ... TRADITIONAL PROCEDURES: The necessity to measure data ... measurement (Ref. 8). Finally, when the necessity for automation was recognized and funds were provided, the effort described in this report (AFWL-TR-82-137) was started.

  2. Cavendish Balance Automation

    Science.gov (United States)

    Thompson, Bryan

    2000-01-01

    This is the final report for a project carried out to modify a manual commercial Cavendish Balance for automated use in a cryostat. The scope of this project was to modify an off-the-shelf manually operated Cavendish Balance to allow for automated operation for periods of hours or days in a cryostat. The purpose of this modification was to allow the balance to be used in the study of effects of superconducting materials on the local gravitational field strength, to determine if the strength of gravitational fields can be reduced. A Cavendish Balance was chosen because it is a fairly simple piece of equipment for measuring gravity, one of the least accurately known and least understood physical constants. The principal activities that occurred under this purchase order were: (1) All the components necessary to hold and automate the Cavendish Balance in a cryostat were designed. Engineering drawings were made of custom parts to be fabricated; other off-the-shelf parts were procured; (2) Software was written in LabView to control the automation process via a stepper motor controller and stepper motor, and to collect data from the balance during testing; (3) Software was written to take the data collected from the Cavendish Balance and reduce it to give a value for the gravitational constant; (4) The components of the system were assembled and fitted to a cryostat. Also the LabView hardware, including the control computer, stepper motor driver, data collection boards, and necessary cabling, was assembled; and (5) The system was operated for a number of periods, data collected, and reduced to give an average value for the gravitational constant.

  3. Autonomy, Automation, and Systems

    Science.gov (United States)

    Turner, Philip R.

    1987-02-01

    Aerospace industry interest in autonomy and automation, given fresh impetus by the national goal of establishing a Space Station, is becoming a major item of research and technology development. The promise of new technology arising from research in Artificial Intelligence (AI) has focused much attention on its potential in autonomy and automation. These technologies can improve performance in autonomous control functions that involve planning, scheduling, and fault diagnosis of complex systems. There are, however, many aspects of system and subsystem design in an autonomous system that impact AI applications, but do not directly involve AI technology. Development of a system control architecture, establishment of an operating system within the design, providing command and sensory data collection features appropriate to automated operation, and the use of design analysis tools to support system engineering are specific examples of major design issues. Aspects such as these must also receive attention and technology development support if we are to implement complex autonomous systems within the realistic limitations of mass, power, cost, and available flight-qualified technology that are all-important to a flight project.

  4. Automation in biological crystallization.

    Science.gov (United States)

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  5. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
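
    The mapping problem sketched above can be illustrated with a small dynamic program (a minimal sketch of the classic contiguous-assignment formulation, not the paper's improved algorithm): m pipelined modules with known costs are split into n contiguous blocks, one per processor, minimizing the bottleneck load. Module costs and counts below are invented for illustration.

        # Sketch: map m pipelined modules onto n processors in contiguous blocks,
        # minimizing the bottleneck (largest per-processor load). Runs in O(n*m^2),
        # far from the improved complexities reported in the abstract.
        def map_modules(costs, n_procs):
            m = len(costs)
            prefix = [0.0]
            for c in costs:
                prefix.append(prefix[-1] + c)

            def block(i, j):                      # total cost of modules i..j-1
                return prefix[j] - prefix[i]

            INF = float("inf")
            # dp[k][j] = best bottleneck when the first j modules use k processors
            dp = [[INF] * (m + 1) for _ in range(n_procs + 1)]
            dp[0][0] = 0.0
            for k in range(1, n_procs + 1):
                for j in range(1, m + 1):
                    for i in range(j):
                        dp[k][j] = min(dp[k][j], max(dp[k - 1][i], block(i, j)))
            return dp[n_procs][m]

        if __name__ == "__main__":
            # Ten modules of varying cost mapped onto four processors.
            print(map_modules([3, 1, 4, 1, 5, 9, 2, 6, 5, 3], 4))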

  6. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  7. Parallel FFT using Eden Skeletons

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg

    2009-01-01

    The paper investigates and compares skeleton-based Eden implementations of different FFT-algorithms on workstation clusters with distributed memory. Our experiments show that the basic divide-and-conquer versions suffer from an inherent input distribution and result collection problem. Advanced approaches like calculating FFT using a parallel map-and-transpose skeleton provide more flexibility to overcome these problems. Assuming a distributed access to input data and re-organising computation to return results in a distributed way improves the parallel runtime behaviour.

  8. Phaser.MRage: automated molecular replacement.

    Science.gov (United States)

    Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J; Oeffner, Robert D; Adams, Paul D; Read, Randy J

    2013-11-01

    Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement.

  9. Phaser.MRage: automated molecular replacement

    International Nuclear Information System (INIS)

    Bunkóczi, Gábor; Echols, Nathaniel; McCoy, Airlie J.; Oeffner, Robert D.; Adams, Paul D.; Read, Randy J.

    2013-01-01

    The functionality of the molecular-replacement pipeline phaser.MRage is introduced and illustrated with examples. Phaser.MRage is a molecular-replacement automation framework that implements a full model-generation workflow and provides several layers of model exploration to the user. It is designed to handle a large number of models and can distribute calculations efficiently onto parallel hardware. In addition, phaser.MRage can identify correct solutions and use this information to accelerate the search. Firstly, it can quickly score all alternative models of a component once a correct solution has been found. Secondly, it can perform extensive analysis of identified solutions to find protein assemblies and can employ assembled models for subsequent searches. Thirdly, it is able to use a priori assembly information (derived from, for example, homologues) to speculatively place and score molecules, thereby customizing the search procedure to a certain class of protein molecule (for example, antibodies) and incorporating additional biological information into molecular replacement

  10. Parallel computing: numerics, applications, and trends

    National Research Council Canada - National Science Library

    Trobec, Roman; Vajteršic, Marián; Zinterhof, Peter

    2009-01-01

    ... and/or distributed systems. The contributions to this book are focused on topics most concerned in the trends of today's parallel computing. These range from parallel algorithmics, programming, tools, network computing to future parallel computing. Particular attention is paid to parallel numerics: linear algebra, differential equations, numerica...

  11. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  12. [Falsified medicines in parallel trade].

    Science.gov (United States)

    Muckenfuß, Heide

    2017-11-01

    The number of falsified medicines on the German market has distinctly increased over the past few years. In particular, stolen pharmaceutical products, a form of falsified medicines, have increasingly been introduced into the legal supply chain via parallel trading. The reasons why parallel trading serves as a gateway for falsified medicines are most likely the complex supply chains and routes of transport. It is hardly possible for national authorities to trace the history of a medicinal product that was bought and sold by several intermediaries in different EU member states. In addition, the heterogeneous outward appearance of imported and relabelled pharmaceutical products facilitates the introduction of illegal products onto the market. Official batch release at the Paul-Ehrlich-Institut offers the possibility of checking some aspects that might provide an indication of a falsified medicine. In some circumstances, this may allow the identification of falsified medicines before they come onto the German market. However, this control is only possible for biomedicinal products that have not received a waiver regarding official batch release. For improved control of parallel trade, better networking among the EU member states would be beneficial. European-wide regulations, e.g., for disclosure of the complete supply chain, would help to minimise the risks of parallel trading and hinder the marketing of falsified medicines.

  13. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
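
    As a concrete illustration of why this kernel parallelizes well, here is a minimal compressed-sparse-row (CSR) matrix-vector product in plain Python/NumPy (not the report's C++ classes): each output entry depends only on its own row slice, so row blocks can be handed to separate OpenMP threads or MPI ranks.

        import numpy as np

        # CSR storage: values, column indices, and row pointers. Each row of
        # y = A @ x reads only its own slice, so the outer loop is the natural
        # place to split work across threads or MPI ranks.
        def csr_matvec(vals, cols, rowptr, x):
            n_rows = len(rowptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):          # candidate loop for row-block parallelism
                lo, hi = rowptr[i], rowptr[i + 1]
                y[i] = np.dot(vals[lo:hi], x[cols[lo:hi]])
            return y

        if __name__ == "__main__":
            # 3x3 example matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
            vals = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
            cols = np.array([0, 2, 1, 0, 2])
            rowptr = np.array([0, 2, 3, 5])
            print(csr_matvec(vals, cols, rowptr, np.array([1.0, 1.0, 1.0])))  # [3. 3. 9.]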

  14. Elongation Cutoff Technique: Parallel Performance

    Directory of Open Access Journals (Sweden)

    Jacek Korchowiec

    2008-01-01

    Full Text Available It is demonstrated that the elongation cutoff technique (ECT) substantially speeds up the quantum-chemical calculation at the Hartree-Fock (HF) level of theory and is especially well suited for parallel performance. A comparison of ECT timings for water chains with the reference HF calculations is given. The analysis includes the overall CPU (central processing unit) time and its most time consuming steps.

  15. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  16. Lightweight Specifications for Parallel Correctness

    Science.gov (United States)

    2012-12-05

    that typically no programmer specification is needed. However, manually determining which reported races are benign and which are bugs can be time...marked by the programmer, but no statement is enclosed with if(true∗), i.e., the set S∗ is empty. We are given a set T of parallel execution traces, and

  17. The parallel adult education system

    DEFF Research Database (Denmark)

    Wahlgren, Bjarne

    2015-01-01

    for competence development. The Danish university educational system includes two parallel programs: a traditional academic track (candidatus) and an alternative practice-based track (master). The practice-based program was established in 2001 and organized as part time. The total program takes half the time...

  18. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  19. Parallel imaging with phase scrambling.

    Science.gov (United States)

    Zaitsev, Maxim; Schultz, Gerrit; Hennig, Juergen; Gruetter, Rolf; Gallichan, Daniel

    2015-04-01

    Most existing methods for accelerated parallel imaging in MRI require additional data, which are used to derive information about the sensitivity profile of each radiofrequency (RF) channel. In this work, a method is presented to avoid the acquisition of separate coil calibration data for accelerated Cartesian trajectories. Quadratic phase is imparted to the image to spread the signals in k-space (aka phase scrambling). By rewriting the Fourier transform as a convolution operation, a window can be introduced to the convolved chirp function, allowing a low-resolution image to be reconstructed from phase-scrambled data without prominent aliasing. This image (for each RF channel) can be used to derive coil sensitivities to drive existing parallel imaging techniques. As a proof of concept, the quadratic phase was applied by introducing an offset to the x² - y² shim and the data were reconstructed using adapted versions of the image space-based sensitivity encoding and GeneRalized Autocalibrating Partially Parallel Acquisitions algorithms. The method is demonstrated in a phantom (1 × 2, 1 × 3, and 2 × 2 acceleration) and in vivo (2 × 2 acceleration) using a 3D gradient echo acquisition. Phase scrambling can be used to perform parallel imaging acceleration without acquisition of separate coil calibration data, demonstrated here for a 3D-Cartesian trajectory. Further research is required to prove the applicability to other 2D and 3D sampling schemes. © 2014 Wiley Periodicals, Inc.

  20. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  1. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade‐induced automation in manufacturing firms using unique data combining a retrospective survey that we have assembled with register data for 2005‐2010. In particular, we establish a causal effect where firms that have specialized in product types for which the Chinese exports to the world market have risen sharply invest more in automated capital compared to firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with high increases in scale and scope of automation have faster productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation.

  2. Automated landmark-guided deformable image registration

    International Nuclear Information System (INIS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. (paper)

  3. Automated landmark-guided deformable image registration.

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  4. An Approach to Office Automation

    OpenAIRE

    Ischenko, A.N.; Tumeo, M.A.

    1983-01-01

    In recent years, the increasing scale of production and degree of specialization within firms has led to a significant growth in the amount of information needed for their successful management. As a result, the use of computer systems (office automation) has become increasingly common. However, no manuals or set automation procedures exist to help organizations design and implement an efficient and effective office automation system. The goals of this paper are to outline some important...

  5. Embedded system for building automation

    OpenAIRE

    Rolih, Andrej

    2014-01-01

    Home automation is a fast-developing field of computer science and electronics. Companies are offering many different products for home automation, ranging anywhere from complete systems for building management and control to simple smart lights that can be connected to the internet. These products offer the user greater living comfort and lower their expenses by reducing energy usage. This thesis shows the development of a simple home automation system that focuses mainly on the enhance...

  6. Automated chair-training of rhesus macaques.

    Science.gov (United States)

    Ponce, C R; Genecin, M P; Perez-Melara, G; Livingstone, M S

    2016-04-01

    Neuroscience research on non-human primates usually requires the animals to sit in a chair. To do this, monkeys are typically fitted with collars and trained to enter the chairs using a pole, a leash, or a jump cage. Animals may initially show resistance and risk injury. We have developed an automated chair-training method that minimizes restraints to ease the animals into their chairs. We developed a method to automatically train animals to enter a primate chair and stick out their heads for neckplate placement. To do this, we fitted the chairs with Arduino microcontrollers coupled to a water-reward system and touch and proximity sensors. We found that the animals responded well to the chair, partially entering the chair within hours, sitting inside the chair within days and allowing us to manually introduce a door and neck plate, all within 14-21 sessions. Although each session could last many hours, automation meant that actual training person-hours could be as little as half an hour per day. The biggest advantage was that animals showed little resistance to entering the chair, compared to monkeys trained by leash pulling. This automated chair-training method can take longer than the standard collar-and-leash approach, but multiple macaques can be trained in parallel with fewer person-hours. It is also a promising method for animal-use refinement and, in our case, it was the only effective training approach for an animal suffering from a behavioral pathology. Copyright © 2016 Elsevier B.V. All rights reserved.
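
    The shaping logic described (a proximity reading gating a water reward, with the criterion tightened over sessions) can be pictured with a small control-loop sketch. The sensor and valve functions below are hypothetical stand-ins for the Arduino-attached hardware, not the authors' firmware, and all thresholds are invented.

        import random
        import time

        # Hypothetical stand-ins for the chair-mounted hardware.
        def read_proximity_cm():
            return random.uniform(0, 100)        # distance of the animal from the chair

        def open_water_valve(ms):
            print(f"reward: valve open {ms} ms")

        def training_session(threshold_cm, duration_s=5.0, reward_ms=200):
            """Reward the animal whenever it comes closer than threshold_cm.
            Later sessions call this with a smaller threshold, shaping the
            animal toward fully entering the chair."""
            t_end = time.time() + duration_s
            rewards = 0
            while time.time() < t_end:
                if read_proximity_cm() < threshold_cm:
                    open_water_valve(reward_ms)
                    rewards += 1
                time.sleep(0.1)
            return rewards

        if __name__ == "__main__":
            for threshold in (80, 40, 10):       # criterion tightened across sessions
                print(threshold, "cm:", training_session(threshold, duration_s=1.0), "rewards")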

  7. Parallelization method for three dimensional MOC calculation

    International Nuclear Information System (INIS)

    Zhang Zhizhu; Li Qing; Wang Kan

    2013-01-01

    A parallelization method based on angular decomposition was designed for the three-dimensional MOC. To improve the parallel efficiency, the directions were pre-grouped and the groups were assembled to minimize the communication. The improved parallelization method was applied to the three-dimensional MOC code TCM. The numerical results show that the results of the parallel calculation agree with the serial results. The parallel efficiency increases markedly after the communication is optimized and the load balanced. (authors)
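
    A toy sketch of the pre-grouping idea (the grouping rule and rank count are illustrative assumptions, not the TCM code's scheme): characteristic directions are first grouped, here by octant of the direction vector, and whole groups are then dealt out to processes so that related directions stay together and communication is kept low.

        from collections import defaultdict

        # Toy angular decomposition: pre-group discrete directions by octant,
        # then assign whole groups to ranks, largest group first, always to the
        # currently least-loaded rank.
        def octant(direction):
            x, y, z = direction
            return (x > 0, y > 0, z > 0)

        def assign_groups(directions, n_ranks):
            groups = defaultdict(list)
            for d in directions:
                groups[octant(d)].append(d)
            assignment = {r: [] for r in range(n_ranks)}
            for _, dirs in sorted(groups.items(), key=lambda kv: -len(kv[1])):
                rank = min(assignment, key=lambda r: len(assignment[r]))
                assignment[rank].extend(dirs)
            return assignment

        if __name__ == "__main__":
            dirs = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1),
                    (-1, -1, 1), (-1, 1, -1), (1, -1, -1), (-1, -1, -1)]
            for rank, work in assign_groups(dirs, 3).items():
                print("rank", rank, "->", work)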

  8. World-wide distribution automation systems

    International Nuclear Information System (INIS)

    Devaney, T.M.

    1994-01-01

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; computer modeling of the system; and distribution management systems

  9. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner

  10. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Full Text Available Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks along with Graphical Processing Units have broadened the scope for parallelism. Several compilers have been updated to address the developing challenges of synchronization and threading. Appropriate program and algorithm classification will benefit software engineers to a great extent by exposing opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed along with a comparison of the issues that challenge classification. A set of algorithms is chosen that matches the structure of the different issues and performs the given tasks. We have tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We incorporated these extensions into the tool, enabling automatic characterization of program code.

  11. Fossil power plant automation

    International Nuclear Information System (INIS)

    Divakaruni, S.M.; Touchton, G.

    1991-01-01

    This paper elaborates on issues facing the utilities industry and seeks to address how new computer-based control and automation technologies, resulting from recent microprocessor evolution, can improve fossil plant operations and maintenance. This in turn can assist utilities to emerge stronger from the challenges ahead. Many presentations at the first ISA/EPRI co-sponsored conference are targeted towards improving the use of computer and control systems in fossil and nuclear power plants, and we believe this to be the right forum to share our ideas

  12. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
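
    The combination rule at the heart of the approach, a strong classifier formed as the sign of a weighted sum of weak-classifier outputs, can be sketched compactly. Decision stumps below stand in for the 15 BP networks and the data are synthetic; the MapReduce layer and feature extraction of the paper are omitted.

        import numpy as np

        # Minimal AdaBoost with decision stumps as weak learners; the strong
        # classifier is sign(sum_i alpha_i * h_i(x)), as in the parallel
        # Adaboost-BP approach (which uses BP networks instead of stumps).
        def train_stump(x, y, w):
            best = None
            for thr in np.unique(x):
                for sign in (1, -1):
                    pred = sign * np.where(x >= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, thr, sign)
            return best

        def adaboost(x, y, rounds=5):
            w = np.full(len(x), 1.0 / len(x))
            ensemble = []
            for _ in range(rounds):
                err, thr, sign = train_stump(x, y, w)
                alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
                pred = sign * np.where(x >= thr, 1, -1)
                w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
                w /= w.sum()
                ensemble.append((alpha, thr, sign))
            return ensemble

        def predict(ensemble, x):
            score = sum(a * s * np.where(x >= t, 1, -1) for a, t, s in ensemble)
            return np.sign(score)

        if __name__ == "__main__":
            x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
            y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
            print(predict(adaboost(x, y), x))    # recovers the labels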

  13. Representing and computing regular languages on massively parallel networks.

    Science.gov (United States)

    Miller, M I; Roysam, B; Smith, K R; O'Sullivan, J A

    1991-01-01

    A general method is proposed for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach establishes the formal connection of rules to Chomsky grammars and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials which have their number of minima growing at precisely the exponential rate that the language of deterministically constrained sequences grow. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. This coupling yields the result that fully parallel stochastic cellular automata can be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  14. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials which have their number of minima growing at precisely the exponential rate that the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively-parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  15. Parallel peak pruning for scalable SMP contour tree computation

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Hamish A. [Univ. of Leeds (United Kingdom); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed-up in OpenMP and up to 50x speed-up in NVIDIA Thrust.

  16. Dynamic workload balancing of parallel applications with user-level scheduling on the Grid

    CERN Document Server

    Korkhov, Vladimir V; Krzhizhanovskaya, Valeria V

    2009-01-01

    This paper suggests a hybrid resource management approach for efficient parallel distributed computing on the Grid. It operates on both application and system levels, combining user-level job scheduling with dynamic workload balancing algorithm that automatically adapts a parallel application to the heterogeneous resources, based on the actual resource parameters and estimated requirements of the application. The hybrid environment and the algorithm for automated load balancing are described, the influence of resource heterogeneity level is measured, and the speedup achieved with this technique is demonstrated for different types of applications and resources.
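
    The core balancing step can be pictured with a small sketch (not the authors' Grid middleware): work units are re-partitioned across heterogeneous workers in proportion to their measured throughput, so faster or less-loaded resources receive correspondingly larger chunks. Worker names and speeds are invented.

        # Sketch: split n_units of work across heterogeneous workers in proportion
        # to their measured speeds; call again whenever the measurements change.
        def balance(n_units, speeds):
            total = sum(speeds.values())
            shares = {w: int(n_units * s / total) for w, s in speeds.items()}
            leftover = n_units - sum(shares.values())    # remainder from rounding down
            fastest = max(speeds, key=speeds.get)
            shares[fastest] += leftover
            return shares

        if __name__ == "__main__":
            measured = {"node-a": 4.0, "node-b": 1.0, "node-c": 2.5}   # units per second
            print(balance(1000, measured))    # node-a gets the largest share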

  17. High-throughput automated system for statistical biosensing employing microcantilevers arrays

    DEFF Research Database (Denmark)

    Bosco, Filippo; Chen, Ching H.; Hwu, En T.

    2011-01-01

    In this paper we present a completely new and fully automated system for parallel microcantilever-based biosensing. Our platform is able to monitor simultaneously the change of resonance frequency (dynamic mode), of deflection (static mode), and of surface roughness of hundreds of cantilevers in ...

  18. The parallel diplomacy and the International Relations of the regions

    OpenAIRE

    Zeraoiu, Zidane

    2011-01-01

    The inclusion of subnational entities in international politics breaks with the States' exclusive privilege of handling external relations. Regions and municipalities have developed international policies that strengthen local affairs in cultural, economic, political, security and strategic cooperation aspects, through the so-called parallel diplomacy activities. The influence of regional institutions in global affairs is growing; however, paradiplomacy practice is not institutionalized a...

  19. IN SITU ENUMERATION OF BACTERIAL ADHESION IN A PARALLEL PLATE FLOW CHAMBER - ELIMINATION OF IN FOCUS FLOWING BACTERIA FROM THE ANALYSIS

    NARCIS (Netherlands)

    MEINDERS, JM; VANDERMEI, HC; BUSSCHER, HJ

    1992-01-01

    Automated in situ enumeration using image analysis of bacterial adhesion to solid substrata in, e.g., a parallel plate flow chamber requires sophisticated methods to ensure that in-focus flowing bacteria are separated from the adhering ones and eliminated from the analysis. In this paper, three

  20. Parallel artificial liquid membrane extraction of psychoactive analytes: a novel approach in therapeutic drug monitoring.

    Science.gov (United States)

    Olsen, Katharina Norgren; Ask, Kristine Skoglund; Pedersen-Bjergaard, Stig; Gjelstad, Astrid

    2018-03-01

    Liquid-liquid extraction is widely used in therapeutic drug monitoring of antipsychotics, but difficulties in automation of the technique can result in long operational time. In this paper, parallel artificial liquid membrane extraction was used for extraction of serotonin- and serotonin-norepinephrine reuptake inhibitors from human plasma, and an approach to automate the technique was investigated. Eight model analytes were extracted from 125 μl human plasma with recoveries in the range 72-111% (relative standard deviation [RSD] ≤12.8%). A semiautomated pipettor was successfully utilized in the procedure, reducing the manual handling time. Real patient samples were analyzed with satisfying accuracy. A semiautomated extraction of serotonin- and serotonin-norepinephrine reuptake inhibitors by parallel artificial liquid membrane extraction was successfully performed.

  1. Parallel Materialization of Large ABoxes.

    Science.gov (United States)

    Narayanan, Sivaramakrishnan; Catalyurek, Umit; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    This paper is concerned with the efficient computation of materialization in a knowledge base with a large ABox. We present a framework for performing this task on a shared-nothing parallel machine. The framework partitions TBox and ABox axioms using a min-min strategy. It utilizes an existing system, like SwiftOWLIM, to perform local inference computations and coordinates exchange of relevant information between processors. Our approach is able to exploit parallelism in the axioms of the TBox to achieve speedup in a cluster. However, this approach is limited by the complexity of the TBox. We present an experimental evaluation of the framework using datasets from the Lehigh University Benchmark (LUBM).

  2. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  3. Automated Test Case Generation

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  4. Automation from pictures

    International Nuclear Information System (INIS)

    Kozubal, A.J.

    1992-01-01

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. (author)
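
    What the generated state program amounts to can be sketched as a transition table keyed by (state, event), with an action attached to each transition and execution advancing only when an external event arrives, which is what lets several state programs share one processor. The states, events, and actions below are invented for illustration, and the Los Alamos toolchain emits C rather than Python.

        # Invented event-driven state program: (state, event) -> (next state, action).
        def pump_on():     print("action: pump on")
        def pump_off():    print("action: pump off")
        def raise_alarm(): print("action: alarm!")

        TRANSITIONS = {
            ("idle",    "start_cmd"): ("running", pump_on),
            ("running", "stop_cmd"):  ("idle",    pump_off),
            ("running", "overheat"):  ("fault",   raise_alarm),
            ("fault",   "reset_cmd"): ("idle",    pump_off),
        }

        def run(events, state="idle"):
            for ev in events:
                if (state, ev) in TRANSITIONS:        # unknown events are ignored
                    state, action = TRANSITIONS[(state, ev)]
                    action()
                print("event", ev, "-> state", state)
            return state

        if __name__ == "__main__":
            run(["start_cmd", "overheat", "reset_cmd", "start_cmd", "stop_cmd"])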

  5. AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT ...

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment tool (AGWA) is a GIS interface jointly developed by the USDA Agricultural Research Service, the U.S. Environmental Protection Agency, the University of Arizona, and the University of Wyoming to automate the parameterization and execution of the Soil Water Assessment Tool (SWAT) and KINEmatic Runoff and EROSion (KINEROS2) hydrologic models. The application of these two models allows AGWA to conduct hydrologic modeling and watershed assessments at multiple temporal and spatial scales. AGWA’s current outputs are runoff (volumes and peaks) and sediment yield, plus nitrogen and phosphorus with the SWAT model. AGWA uses commonly available GIS data layers to fully parameterize, execute, and visualize results from both models. Through an intuitive interface the user selects an outlet from which AGWA delineates and discretizes the watershed using a Digital Elevation Model (DEM) based on the individual model requirements. The watershed model elements are then intersected with soils and land cover data layers to derive the requisite model input parameters. The chosen model is then executed, and the results are imported back into AGWA for visualization. This allows managers to identify potential problem areas where additional monitoring can be undertaken or mitigation activities can be focused. AGWA also has tools to apply an array of best management practices. There are currently two versions of AGWA available; AGWA 1.5 for

  6. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  7. Streaming for Functional Data-Parallel Languages

    DEFF Research Database (Denmark)

    Madsen, Frederik Meisner

    In this thesis, we investigate streaming as a general solution to the space inefficiency commonly found in functional data-parallel programming languages. The data-parallel paradigm maps well to parallel SIMD-style hardware. However, the traditional fully materializing execution strategy......, and the limited memory in these architectures, severely constrains the data sets that can be processed. Moreover, the language-integrated cost semantics for nested data parallelism pioneered by NESL depends on a parallelism-flattening execution strategy that only exacerbates the problem. This is because...... by extending two existing data-parallel languages: NESL and Accelerate. In the extensions we map bulk operations to data-parallel streams that can evaluate fully sequential, fully parallel or anything in between. By a dataflow, piecewise parallel execution strategy, the runtime system can adjust to any target...

  8. Aspects of computation on asynchronous parallel processors

    International Nuclear Information System (INIS)

    Wright, M.

    1989-01-01

    The increasing availability of asynchronous parallel processors has provided opportunities for original and useful work in scientific computing. However, the field of parallel computing is still in a highly volatile state, and researchers display a wide range of opinion about many fundamental questions such as models of parallelism, approaches for detecting and analyzing parallelism of algorithms, and tools that allow software developers and users to make effective use of diverse forms of complex hardware. This volume collects the work of researchers specializing in different aspects of parallel computing, who met to discuss the framework and the mechanics of numerical computing. The far-reaching impact of high-performance asynchronous systems is reflected in the wide variety of topics, which include scientific applications (e.g. linear algebra, lattice gauge simulation, ordinary and partial differential equations), models of parallelism, parallel language features, task scheduling, automatic parallelization techniques, tools for algorithm development in parallel environments, and system design issues

  9. Fecal culture

    Science.gov (United States)

    [Consumer health reference entry; only page fragments were captured.] Alternative names: stool culture; culture - stool; gastroenteritis fecal culture. Related images: Salmonella typhi, Yersinia enterocolitica, Campylobacter jejuni, and Clostridium difficile organisms. References: Beavis, KG, ...

  10. High-speed parallel counter

    International Nuclear Information System (INIS)

    Gus'kov, B.N.; Kalinnikov, V.A.; Krastev, V.R.; Maksimov, A.N.; Nikityuk, N.M.

    1985-01-01

    This paper describes a high-speed parallel counter that contains 31 inputs and 15 outputs and is implemented by integrated circuits of series 500. The counter is designed for fast sampling of events according to the number of particles that pass simultaneously through the hodoscopic plane of the detector. The minimum delay of the output signals relative to the input is 43 nsec. The duration of the output signals can be varied from 75 to 120 nsec

  11. An anthropologist in parallel structure

    Directory of Open Access Journals (Sweden)

    Noelle Molé Liston

    2016-08-01

    Full Text Available The essay examines the parallels between Molé Liston’s studies on labor and precarity in Italy and the United States’ anthropology job market. Probing the way economic shift reshaped the field of anthropology of Europe in the late 2000s, the piece explores how the neoliberalization of the American academy increased the value in studying the hardships and daily lives of non-western populations in Europe.

  12. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. To avoid cost-intensive derivative computations, we use Landweber-Kaczmarz iteration and, in order to improve the overall results, some additional sparsity constraints.
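
    The abstract casts parallel-MRI reconstruction as an inverse problem attacked with Landweber-Kaczmarz iteration plus sparsity constraints. The sketch below is not the authors' algorithm; it only illustrates the flavour of such iterations on a toy linear system, combining a plain Landweber update with a soft-thresholding step that enforces sparsity.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal step enforcing a simple sparsity constraint."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def landweber_sparse(A, y, steps=200, lam=0.01):
    """Landweber iteration x <- x + tau * A^T (y - A x), followed by
    soft-thresholding; tau is set from the largest singular value of A."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + tau * A.T @ (y - A @ x)
        x = soft_threshold(x, lam * tau)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100))          # under-determined toy "measurement" operator
    x_true = np.zeros(100)
    x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]  # sparse ground truth
    y = A @ x_true
    x_rec = landweber_sparse(A, y)
    print("recovered support:", np.flatnonzero(np.abs(x_rec) > 0.1))
```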

  13. Wakefield calculations on parallel computers

    International Nuclear Information System (INIS)

    Schoessow, P.

    1990-01-01

    The use of parallelism in the solution of wakefield problems is illustrated for two different computer architectures (SIMD and MIMD). Results are given for finite difference codes which have been implemented on a Connection Machine and an Alliant FX/8 and which are used to compute wakefields in dielectric loaded structures. Benchmarks on code performance are presented for both cases. 4 refs., 3 figs., 2 tabs

  14. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  15. A Demonstration of Automated DNA Sequencing.

    Science.gov (United States)

    Latourelle, Sandra; Seidel-Rogol, Bonnie

    1998-01-01

    Details a simulation that employs a paper-and-pencil model to demonstrate the principles behind automated DNA sequencing. Discusses the advantages of automated sequencing as well as the chemistry of automated DNA sequencing. (DDR)

  16. Parallel processing of genomics data

    Science.gov (United States)

    Agapito, Giuseppe; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-10-01

    The availability of high-throughput experimental platforms for the analysis of biological samples, such as mass spectrometry, microarrays and Next Generation Sequencing, has made it possible to analyze a whole genome in a single experiment. Such platforms produce an enormous volume of data per experiment, so the analysis of this flow of data poses several challenges in terms of data storage, preprocessing, and analysis. To face those issues, efficient, possibly parallel, bioinformatics software needs to be used to preprocess and analyze the data, for instance to highlight genetic variation associated with complex diseases. In this paper we present a parallel algorithm for the preprocessing and statistical analysis of genomics data, able to handle high-dimensional data with good response times. The proposed system is able to find statistically significant biological markers that discriminate classes of patients who respond to drugs in different ways. Experiments performed on real and synthetic genomic datasets show good speed-up and scalability.
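
    To make the parallel-preprocessing idea concrete, here is a small, generic sketch (not the authors' system) that splits a synthetic genotype matrix into chunks of markers and scores each chunk in a separate process; the "score" is just the allele-frequency gap between responders and non-responders, standing in for a real statistical test.

```python
import numpy as np
from multiprocessing import Pool

# Synthetic genotype matrix: rows = patients, columns = markers,
# values in {0, 1, 2} (allele counts). Purely illustrative data.
rng = np.random.default_rng(1)
N_PATIENTS, N_MARKERS = 200, 10_000
genotypes = rng.integers(0, 3, size=(N_PATIENTS, N_MARKERS))
responder = rng.integers(0, 2, size=N_PATIENTS).astype(bool)

def marker_chunk_stats(cols):
    """Score a chunk of markers by the allele-frequency gap between
    responders and non-responders (a stand-in for a real test)."""
    chunk = genotypes[:, cols]
    diff = chunk[responder].mean(axis=0) - chunk[~responder].mean(axis=0)
    return list(zip(cols, np.abs(diff)))

if __name__ == "__main__":
    chunks = np.array_split(np.arange(N_MARKERS), 8)   # one chunk per task
    with Pool(processes=4) as pool:
        results = pool.map(marker_chunk_stats, chunks)
    scores = [item for chunk in results for item in chunk]
    scores.sort(key=lambda t: t[1], reverse=True)
    print("top 5 candidate markers:", scores[:5])
```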

  17. Opening up Library Automation Software

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  18. Automated Power-Distribution System

    Science.gov (United States)

    Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.

    1992-01-01

    Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.

  19. Automation in Catholic College Libraries.

    Science.gov (United States)

    Stussy, Susan A.

    1981-01-01

    Reports on a 1980 survey of library automation in 105 Catholic colleges with collections containing less than 300,000 bibliographic items. The report indicates that network membership and grant funding were a vital part of library automation in the schools surveyed. (Author/LLS)

  20. Library Automation: A Year on.

    Science.gov (United States)

    Electronic Library, 1997

    1997-01-01

    A follow-up interview with librarians from Hong Kong, Mexico, Australia, Canada, and New Zealand about library automation systems in their libraries and their plans for the future. Discusses system performance, upgrades, services, resources, intranets, trends in automation, Web interfaces, full-text image/document systems, document delivery, OPACs…

  1. Library Automation: A Balanced View

    Science.gov (United States)

    Avram, Henriette

    1972-01-01

    Ellsworth Mason's two recently published papers, severely criticizing library automation, are refuted. While admitting to the failures and problems, this paper also presents the positive accomplishments in a brief evaluation of the status of library automation in 1971. (16 references) (Author/SJ)

  2. Library Automation: A Critical Review.

    Science.gov (United States)

    Overmyer, LaVahn

    This report has two main purposes: (1) To give an account of the use of automation in selected libraries throughout the country and in the development of networks; and (2) To discuss some of the fundamental considerations relevant to automation and the implications for library education, library research and the library profession. The first part…

  3. Automated methods of corrosion measurement

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Bech-Nielsen, Gregers; Reeve, John Ch

    1997-01-01

    to revise assumptions regarding the basis of the method, which sometimes leads to the discovery of as-yet unnoticed phenomena. The present selection of automated methods for corrosion measurements is not motivated simply by the fact that a certain measurement can be performed automatically. Automation...

  4. Automated Methods Of Corrosion Measurements

    DEFF Research Database (Denmark)

    Bech-Nielsen, Gregers; Andersen, Jens Enevold Thaulov; Reeve, John Ch

    1997-01-01

    The chapter describes the following automated measurements: Corrosion Measurements by Titration, Imaging Corrosion by Scanning Probe Microscopy, Critical Pitting Temperature and Application of the Electrochemical Hydrogen Permeation Cell.

  5. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…

  6. A Survey of Automated Deduction

    OpenAIRE

    Bundy, Alan

    1999-01-01

    We survey research in the automation of deductive inference, from its beginnings in the early history of computing to the present day. We identify and describe the major areas of research interest and their applications. The area is characterised by its wide variety of proof methods, forms of automated deduction and applications.

  7. The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit

    Science.gov (United States)

    Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Mark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete

    1998-01-01

    Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.

  8. Effects of advanced carbohydrate counting guided by an automated bolus calculator in Type 1 diabetes mellitus (StenoABC)

    DEFF Research Database (Denmark)

    Hommel, E; Schmidt, S; Vistisen, D

    2017-01-01

    AIMS: To test whether concomitant use of an automated bolus calculator for people with Type 1 diabetes carrying out advanced carbohydrate counting would induce further improvements in metabolic control. METHODS: We conducted a 12-month, randomized, parallel-group, open-label, single-centre, inves... carbohydrate counting using mental calculations (MC group) or advanced carbohydrate counting using an automated bolus calculator (ABC group) during a 3.5-h group training course. For 12 months after training, participants attended a specialized diabetes centre quarterly. The primary outcome was change in HbA1c... HbA1c reductions when guided by an automated bolus calculator (NCT02084498).

  9. Continuous culture apparatus and methodology

    International Nuclear Information System (INIS)

    Conway, H.L.

    1975-01-01

    At present, we are investigating the sorption of potentially toxic trace elements by phytoplankton under controlled laboratory conditions. Continuous culture techniques were used to study the mechanism of the sorption of the trace elements by unialgal diatom populations and the factors influencing this sorption. Continuous culture methodology has been used extensively to study bacterial kinetics. It is an excellent technique for obtaining a known physiological state of phytoplankton populations. An automated method for the synthesis of continuous culture medium for use in these experiments is described

  10. The Science of Home Automation

    Science.gov (United States)

    Thomas, Brian Louis

    Smart home technologies and the concept of home automation have become more popular in recent years. This popularity has been accompanied by social acceptance of passive sensors installed throughout the home. The subsequent increase in smart homes facilitates the creation of home automation strategies. We believe that home automation strategies can be generated intelligently by utilizing smart home sensors and activity learning. In this dissertation, we hypothesize that home automation can benefit from activity awareness. To test this, we develop our activity-aware smart automation system, CARL (CASAS Activity-aware Resource Learning). CARL learns the associations between activities and device usage from historical data and utilizes the activity-aware capabilities to control the devices. To help validate CARL we deploy and test three different versions of the automation system in a real-world smart environment. To provide a foundation of activity learning, we integrate existing activity recognition and activity forecasting into CARL home automation. We also explore two alternatives to using human-labeled data to train the activity learning models. The first unsupervised method is Activity Detection, and the second is a modified DBSCAN algorithm that utilizes Dynamic Time Warping (DTW) as a distance metric. We compare the performance of activity learning with human-defined labels and with automatically-discovered activity categories. To provide evidence in support of our hypothesis, we evaluate CARL automation in a smart home testbed. Our results indicate that home automation can be boosted through activity awareness. We also find that the resulting automation has a high degree of usability and comfort for the smart home resident.

  11. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    Science.gov (United States)

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  12. Joint Experimentation on Scalable Parallel Processors (JESPP)

    National Research Council Canada - National Science Library

    Davis, Dan M; Lucas, Robert F; Yao, Ka-Thia; Wagenbrath, Gene

    2006-01-01

    ...) required expansion of its joint semi-automated forces (JSAF) code capabilities; including number of entities, behavior complexity, terrain resolution, infrastructure features, environmental realism, and analytical potential...

  13. Berkeley automated supernova search

    Energy Technology Data Exchange (ETDEWEB)

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.; Crawford, F.S.; Burns, M.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982.
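
    The core of the search is comparing each new galaxy image with a stored reference image and flagging significant residual flux. A toy version of that comparison (synthetic images, not the original minicomputer code) could look like this:

```python
import numpy as np

def detect_transient(new_image, reference, n_sigma=5.0):
    """Flag pixels whose residual exceeds n_sigma times the residual noise."""
    residual = new_image - reference
    noise = np.std(residual)
    candidates = np.argwhere(residual > n_sigma * noise)
    return residual, candidates

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    reference = rng.normal(100.0, 5.0, size=(64, 64))   # stored galaxy image
    new_image = reference + rng.normal(0.0, 5.0, size=(64, 64))
    new_image[40, 21] += 300.0                           # injected "supernova"
    _, candidates = detect_transient(new_image, reference)
    print("candidate pixels:", candidates)
```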

  14. Berkeley automated supernova search

    International Nuclear Information System (INIS)

    Kare, J.T.; Pennypacker, C.R.; Muller, R.A.; Mast, T.S.

    1981-01-01

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982

  15. Automated attendance accounting system

    Science.gov (United States)

    Chapman, C. P. (Inventor)

    1973-01-01

    An automated accounting system useful for applying data to a computer from any or all of a multiplicity of data terminals is disclosed. The system essentially includes a preselected number of data terminals which are each adapted to convert data words of decimal form to another form, i.e., binary, usable with the computer. Each data terminal may take the form of a keyboard unit having a number of depressable buttons or switches corresponding to selected data digits and/or function digits. A bank of data buffers, one of which is associated with each data terminal, is provided as a temporary storage. Data from the terminals is applied to the data buffers on a digit by digit basis for transfer via a multiplexer to the computer.

  16. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high consequent national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  17. (No) Security in Automation!?

    CERN Document Server

    Lüders, S

    2008-01-01

    Modern Information Technologies like Ethernet, TCP/IP, web server or FTP are nowadays increasingly used in distributed control and automation systems. Thus, information from the factory floor is now directly available at the management level (From Shop-Floor to Top-Floor) and can be manipulated from there. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited, too: worms and viruses spread within seconds via Ethernet and attackers are becoming interested in control systems. Unfortunately, control systems lack the standard security features that usual office PCs have. This contribution will elaborate on these problems, discuss the vulnerabilities of modern control systems and present international initiatives for mitigation.

  18. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions....... The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows...... for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...

  19. Printing quality control automation

    Science.gov (United States)

    Trapeznikova, O. V.

    2018-04-01

    One of the most important problems in standardizing the offset printing process is controlling the print quality rating and automating that control. To solve the problem, software has been developed that takes into account the specifics of the printing system components and their behavior during the printing process. To characterize the distribution of the ink layer on the printed substrate, the so-called deviation of the ink layer thickness on the sheet from the nominal surface is suggested. Constructing the surface projections of the color gamut bodies from the geometric data makes it possible to visualize the color reproduction gamut of printing systems in brightness ranges and specific color sectors, which provides a qualitative comparison of systems by the reproduction of individual colors over varying ranges of brightness.

  20. Automated electronic filter design

    CERN Document Server

    Banerjee, Amal

    2017-01-01

    This book describes a novel, efficient and powerful scheme for designing and evaluating the performance characteristics of any electronic filter designed with predefined specifications. The author explains techniques that enable readers to eliminate complicated manual, and thus error-prone and time-consuming, steps of traditional design techniques. The presentation includes demonstration of efficient automation, using an ANSI C language program, which accepts any filter design specification (e.g. Chebyshev low-pass filter, cut-off frequency, pass-band ripple etc.) as input and generates as output a SPICE (Simulation Program with Integrated Circuit Emphasis) format netlist. Readers can then use this netlist to run simulations with any version of the popular SPICE simulator, increasing accuracy of the final results, without violating any of the key principles of the traditional design scheme.
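
    The book's tool is an ANSI C program that turns a filter specification into a SPICE netlist. Purely to illustrate the spec-to-netlist idea, the sketch below emits a first-order RC low-pass netlist for a requested cut-off frequency; the component values, node names and analysis line are our own choices, not the book's.

```python
import math

def rc_lowpass_netlist(cutoff_hz, r_ohms=1_000.0):
    """Emit a SPICE netlist for a first-order RC low-pass filter.
    The capacitor is sized from f_c = 1 / (2*pi*R*C)."""
    c_farads = 1.0 / (2.0 * math.pi * r_ohms * cutoff_hz)
    return "\n".join([
        f"* First-order RC low-pass, fc = {cutoff_hz:g} Hz",
        "VIN in 0 AC 1",
        f"R1 in out {r_ohms:g}",
        f"C1 out 0 {c_farads:.3e}",
        ".AC DEC 20 1 1e6",
        ".END",
    ])

if __name__ == "__main__":
    print(rc_lowpass_netlist(cutoff_hz=1_000.0))
```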

  1. Impact of Staff Organizational Culture on the Implementation of ...

    African Journals Online (AJOL)

    This study surveyed the impact of library staff organizational culture on the implementation of automation in libraries of federal universities in the North-East Zone of Nigeria. The objectives of the study were to determine: the level of implementation of automation of libraries in federal universities of the North- ...

  2. Impact of staff organizational culture on the implementation of ...

    African Journals Online (AJOL)

    This study surveyed the Impact of Library Staff Organizational Culture on the Implementation of Automation in Libraries of Federal Universities in the North-East Zone of Nigeria. The objectives of the Study were to determine: the level of implementation of automation of libraries in Federal Universities of the North-East Zone ...

  3. Automated Essay Scoring

    Directory of Open Access Journals (Sweden)

    Semire DIKLI

    2006-01-01

    Full Text Available The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e. word processing, have been of great assistance to writers in modifying their essays. Research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali, 2004). AES is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). Revision and feedback are essential aspects of the writing process. Students need to receive feedback in order to increase their writing quality. However, responding to student papers can be a burden for teachers; particularly if they have a large number of students and assign frequent writing assignments, providing individual feedback to student essays can be quite time consuming. AES systems can be very useful because they can provide the student with a score as well as feedback within seconds (Page, 2003). Four types of AES systems are widely used by testing companies, universities, and public schools: Project Essay Grader (PEG), Intelligent Essay Assessor (IEA), E-rater, and IntelliMetric. AES is a developing technology. Many AES systems are used to overcome time, cost, and generalizability issues in writing assessment. The accuracy and reliability of these systems have been proven to be high. The search for excellence in machine scoring of essays continues, and numerous studies are being conducted to improve the effectiveness of AES systems.

  4. Automated endoscope reprocessors.

    Science.gov (United States)

    Desilets, David; Kaul, Vivek; Tierney, William M; Banerjee, Subhas; Diehl, David L; Farraye, Francis A; Kethu, Sripathi R; Kwon, Richard S; Mamula, Petar; Pedrosa, Marcos C; Rodriguez, Sarah A; Wong Kee Song, Louis-Michel

    2010-10-01

    The ASGE Technology Committee provides reviews of existing, new, or emerging endoscopic technologies that have an impact on the practice of GI endoscopy. Evidence-based methodology is used, with a MEDLINE literature search to identify pertinent clinical studies on the topic and a MAUDE (U.S. Food and Drug Administration Center for Devices and Radiological Health) database search to identify the reported complications of a given technology. Both are supplemented by accessing the "related articles" feature of PubMed and by scrutinizing pertinent references cited by the identified studies. Controlled clinical trials are emphasized, but in many cases data from randomized, controlled trials are lacking. In such cases, large case series, preliminary clinical studies, and expert opinions are used. Technical data are gathered from traditional and Web-based publications, proprietary publications, and informal communications with pertinent vendors. Technology Status Evaluation Reports are drafted by 1 or 2 members of the ASGE Technology Committee, reviewed and edited by the committee as a whole, and approved by the Governing Board of the ASGE. When financial guidance is indicated, the most recent coding data and list prices at the time of publication are provided. For this review, the MEDLINE database was searched through February 2010 for articles related to automated endoscope reprocessors, using the words endoscope reprocessing, endoscope cleaning, automated endoscope reprocessors, and high-level disinfection. Technology Status Evaluation Reports are scientific reviews provided solely for educational and informational purposes. Technology Status Evaluation Reports are not rules and should not be construed as establishing a legal standard of care or as encouraging, advocating, requiring, or discouraging any particular treatment or payment for such treatment. Copyright © 2010 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  5. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

    Parallel Kinematics - Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. The book covers the systematic classification of parallel mechanisms (PMs) and provides a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms is in the field of machine tools, where they are also called parallel kinematics machines and have become an emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, a singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  6. Practical Parallel Divide-and-Conquer Algorithms

    National Research Council Canada - National Science Library

    Hardwick, Jonathan

    1997-01-01

    .... This thesis shows that by restricting the problem set to that of data-parallel divide and conquer algorithms I can maintain the expressibility of full nested data-parallel languages while achieving...
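
    As a minimal illustration of the data-parallel divide-and-conquer pattern itself (not of the thesis' machinery), the sketch below sorts a list by handing the two recursive halves to a process pool and merging the results.

```python
from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def mergesort(xs):
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(mergesort(xs[:mid]), mergesort(xs[mid:]))

def parallel_mergesort(xs, executor, threshold=50_000):
    """Divide: sort the two halves in separate processes; conquer: merge."""
    if len(xs) <= threshold:
        return mergesort(xs)
    mid = len(xs) // 2
    left = executor.submit(mergesort, xs[:mid])
    right = executor.submit(mergesort, xs[mid:])
    return merge(left.result(), right.result())

if __name__ == "__main__":
    import random
    data = [random.random() for _ in range(200_000)]
    with ProcessPoolExecutor(max_workers=2) as ex:
        assert parallel_mergesort(data, ex) == sorted(data)
```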

  7. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

    Madsen, Kaj; NA NA NA Olesen, Dorte

    Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  8. Tools for the Automation of Large Distributed Control Systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit - SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.

  9. Tools for the automation of large control systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit – SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.

  10. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.
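
    One concrete form of the static analysis mentioned above is checking a workflow graph before execution, for example for actors whose inputs are never produced or for cyclic dependencies. The following generic sketch (not the Kepler curation package) performs both checks on a hypothetical three-actor workflow.

```python
def find_missing_inputs(workflow):
    """Report actors that consume a data product no other actor produces
    (external source files would be whitelisted in a real check)."""
    produced = {out for actor in workflow.values() for out in actor["outputs"]}
    return {name: [i for i in actor["inputs"] if i not in produced]
            for name, actor in workflow.items()
            if any(i not in produced for i in actor["inputs"])}

def has_cycle(workflow):
    """Detect cycles in the actor dependency graph by depth-first search."""
    producers = {}
    for name, actor in workflow.items():
        for out in actor["outputs"]:
            producers.setdefault(out, []).append(name)
    def deps(name):
        return [p for i in workflow[name]["inputs"] for p in producers.get(i, [])]
    WHITE, GREY, BLACK = 0, 1, 2
    color = {name: WHITE for name in workflow}
    def dfs(name):
        color[name] = GREY
        for d in deps(name):
            if color[d] == GREY or (color[d] == WHITE and dfs(d)):
                return True
        color[name] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in workflow)

if __name__ == "__main__":
    # Hypothetical three-actor curation workflow.
    wf = {
        "load":     {"inputs": ["raw.csv"], "outputs": ["records"]},
        "validate": {"inputs": ["records"], "outputs": ["clean"]},
        "publish":  {"inputs": ["clean"],   "outputs": ["archive"]},
    }
    print("unproduced inputs:", find_missing_inputs(wf))
    print("cycle detected:", has_cycle(wf))
```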

  11. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  12. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

    In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  13. Parallel BLAST on split databases.

    Science.gov (United States)

    Mathog, David R

    2003-09-22

    BLAST programs often run on large SMP machines where multiple threads can work simultaneously and there is enough memory to cache the databases between program runs. A group of programs is described which allows comparable performance to be achieved with a Beowulf configuration in which no node has enough memory to cache a database but the cluster as an aggregate does. To achieve this result, databases are split into equal sized pieces and stored locally on each node. Each query is run on all nodes in parallel and the resultant BLAST output files from all nodes merged to yield the final output. Source code is available from ftp://saf.bio.caltech.edu/
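
    The parallelism here comes from splitting the database into equal pieces, running the same query against every piece, and merging the per-node outputs. The sketch below shows only that split-and-merge bookkeeping on toy FASTA files with a placeholder substring search; it is not the published programs and does not invoke BLAST itself.

```python
from pathlib import Path

def split_fasta(path, n_pieces, out_dir):
    """Split a FASTA file into n roughly equal pieces, keeping records intact."""
    records = Path(path).read_text().split(">")[1:]           # drop leading empty chunk
    pieces = [records[i::n_pieces] for i in range(n_pieces)]  # round-robin assignment
    out_paths = []
    for i, recs in enumerate(pieces):
        p = Path(out_dir) / f"db_part_{i}.fasta"
        p.write_text("".join(">" + r for r in recs))
        out_paths.append(p)
    return out_paths

def search_piece(query, piece_path):
    """Placeholder for the real per-node search; returns (hit_id, score) pairs."""
    hits = []
    for rec in piece_path.read_text().split(">")[1:]:
        header, seq = rec.split("\n", 1)
        if query in seq.replace("\n", ""):
            hits.append((header.split()[0], len(query)))
    return hits

def merged_search(query, piece_paths):
    """Run the query against every piece and merge the ranked results."""
    all_hits = [h for p in piece_paths for h in search_piece(query, p)]
    return sorted(all_hits, key=lambda h: h[1], reverse=True)

if __name__ == "__main__":
    demo = Path("demo_db.fasta")
    demo.write_text(">seq1 test\nACGTACGTGGCC\n>seq2 test\nTTTTACGTAAAA\n")
    parts = split_fasta(demo, 2, ".")
    print(merged_search("ACGT", parts))
```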

  14. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  15. Parallel Compilation of CMS Software

    CERN Document Server

    Ashby, Shaun; Schmid, Stefan; Tuura, Lassi A

    2005-01-01

    LHC experiments have large amounts of software to build. CMS has studied ways to shorten project build times using parallel and distributed builds as well as improved ways to decide what to rebuild. We have experimented with making idle desktop and server machines easily available as a virtual build cluster using distcc and zeroconf. We have also tested variations of ccache and more traditional make dependency analysis. We report on our test results, with analysis of the factors that most improve or limit build performance.

  16. Parallel computation of rotating flows

    DEFF Research Database (Denmark)

    Lundin, Lars Kristian; Barker, Vincent A.; Sørensen, Jens Nørkær

    1999-01-01

    This paper deals with the simulation of 3‐D rotating flows based on the velocity‐vorticity formulation of the Navier‐Stokes equations in cylindrical coordinates. The governing equations are discretized by a finite difference method. The solution is advanced to a new time level by a two‐step process...... is that of solving a singular, large, sparse, over‐determined linear system of equations, and the iterative method CGLS is applied for this purpose. We discuss some of the mathematical and numerical aspects of this procedure and report on the performance of our software on a wide range of parallel computers. Darbe...
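
    The linear-algebra core mentioned above is CGLS, conjugate gradients applied to a least-squares problem without forming the normal equations explicitly. A dense-matrix toy version of the iteration (the paper's solver works on the discretized Navier-Stokes system in parallel) is sketched below.

```python
import numpy as np

def cgls(A, b, iters=100, tol=1e-10):
    """Conjugate gradient for least squares: minimizes ||A x - b||_2
    without ever forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.normal(size=(500, 80))                     # over-determined toy system
    b = A @ rng.normal(size=80) + 0.01 * rng.normal(size=500)
    x = cgls(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```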

  17. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Aman, J.K.

    1986-01-01

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 x 10^9 bits per 14 x 17 inch film, the equivalent of 2200 computer floppy discs. Parts handling systems and robotics, already applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to fully automate the exposure step. Finally, computer aided interpretation appears on the horizon. A unit which laser scans a 14 x 17 inch film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer aided interpretation). The system, called FDRS (Film Digital Radiography System), is moving toward 50 micron (approximately 16 lines/mm) resolution, which is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system (author) [pt

  18. Parallel Processing at the High School Level.

    Science.gov (United States)

    Sheary, Kathryn Anne

    This study investigated the ability of high school students to cognitively understand and implement parallel processing. Data indicates that most parallel processing is being taught at the university level. Instructional modules on C, Linux, and the parallel processing language, P4, were designed to show that high school students are highly…

  19. A parallel gravitational N-body kernel

    NARCIS (Netherlands)

    Portegies Zwart, S.; McMillan, S.; Groen, D.; Gualandris, A.; Sipior, M.; Vermin, W.

    2008-01-01

    We describe source code level parallelization for the kira direct gravitational N-body integrator, the workhorse of the starlab production environment for simulating dense stellar systems. The parallelization strategy, called "j-parallelization", involves the partition of the computational domain by
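
    In j-parallelization, the particles that exert forces (the j-particles) are partitioned over processors, each processor accumulates a partial force sum over its share, and the partial sums are combined. The sketch below mimics that partition-and-reduce step with a process pool and naive softened gravity; it is an illustration only, not the kira integrator.

```python
import numpy as np
from multiprocessing import Pool

G = 1.0          # gravitational constant in code units
EPS = 1.0e-3     # softening length to avoid singularities
N_BLOCKS = 4     # number of j-particle partitions (one per worker)

rng = np.random.default_rng(4)
N = 2_000
pos = rng.normal(size=(N, 3))
mass = np.full(N, 1.0 / N)

def partial_acceleration(k):
    """Acceleration on every i-particle from the k-th block of j-particles."""
    j = np.arange(k, N, N_BLOCKS)                # this worker's share of j-particles
    pj, mj = pos[j], mass[j]
    dx = pj[None, :, :] - pos[:, None, :]        # pairwise separations (N_i, N_j, 3)
    r2 = np.sum(dx * dx, axis=2) + EPS**2
    return G * np.sum(mj[None, :, None] * dx / r2[:, :, None]**1.5, axis=1)

if __name__ == "__main__":
    with Pool(processes=N_BLOCKS) as pool:
        partials = pool.map(partial_acceleration, range(N_BLOCKS))
    acc = np.sum(partials, axis=0)               # combine the partial force sums
    print("max |a|:", np.max(np.linalg.norm(acc, axis=1)))
```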

  20. Comparison of Parallel Viscosity with Neoclassical Theory

    OpenAIRE

    K., Ida; N., Nakajima

    1996-01-01

    Toroidal rotation profiles are measured with charge exchange spectroscopy for the plasma heated with tangential NBI in the CHS heliotron/torsatron device to estimate parallel viscosity. The parallel viscosity derived from the toroidal rotation velocity shows good agreement with the neoclassical parallel viscosity plus the perpendicular viscosity (mu_perp = 2 m^2/s).

  1. Identifying, Quantifying, Extracting and Enhancing Implicit Parallelism

    Science.gov (United States)

    Agarwal, Mayank

    2009-01-01

    The shift of the microprocessor industry towards multicore architectures has placed a huge burden on the programmers by requiring explicit parallelization for performance. Implicit Parallelization is an alternative that could ease the burden on programmers by parallelizing applications "under the covers" while maintaining sequential semantics…

  2. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

    Full Text Available The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  3. Data-parallel DNS of turbulent flow

    NARCIS (Netherlands)

    Verstappen, R.W.C.P.; Veldman, A.E.P.; Emerson, DR; Ecer, A; Periaux, J; Satofuka, N

    1998-01-01

    This contribution deals with direct numerical simulation (DNS) of incompressible turbulent flows on parallel computers. We make use of the data-parallel model on shared memory systems as well as on a distributed memory machine. The combination of fast parallel computers and efficient numerical

  4. Cloud identification using genetic algorithms and massively parallel computation

    Science.gov (United States)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
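
    The last part of the abstract mentions subpopulations, shared breeding pools and migration of individuals. A very small island-model genetic algorithm on a toy bit-counting fitness function, with periodic ring migration between subpopulations, is sketched below; the actual work ran on MasPar hardware against AVHRR spectral data, which is not reproduced here.

```python
import random

GENOME_LEN, POP_SIZE, N_ISLANDS = 40, 30, 4

def fitness(genome):
    """Toy objective: count of ones (the real work scored cloud classifications)."""
    return sum(genome)

def evolve(pop, generations=20):
    """Truncation selection, one-point crossover and one-bit mutation."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)
            child = a[:cut] + b[cut:]
            i = random.randrange(GENOME_LEN)
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return pop

def migrate(islands, n_migrants=2):
    """Ring migration: each island receives the best individuals of its neighbour."""
    best = [sorted(isl, key=fitness, reverse=True)[:n_migrants] for isl in islands]
    for k, isl in enumerate(islands):
        isl.sort(key=fitness)                  # worst individuals first
        isl[:n_migrants] = [g[:] for g in best[(k - 1) % N_ISLANDS]]
    return islands

if __name__ == "__main__":
    random.seed(5)
    islands = [[[random.randint(0, 1) for _ in range(GENOME_LEN)]
                for _ in range(POP_SIZE)] for _ in range(N_ISLANDS)]
    for epoch in range(5):
        islands = [evolve(isl) for isl in islands]   # islands could run in parallel
        islands = migrate(islands)
    print("best fitness:", max(fitness(g) for isl in islands for g in isl))
```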

  5. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  6. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...
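
    The quicksort example in the passage is easy to make concrete: choosing the pivot uniformly at random gives an expected O(n log n) running time on every input, including the adversarial orderings that drive a fixed-pivot variant to O(n^2). A minimal sketch:

```python
import random

def randomized_quicksort(xs):
    """Quicksort with a uniformly random pivot; expected O(n log n) comparisons
    on every input, although any particular run can still be unlucky."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

if __name__ == "__main__":
    data = list(range(1000, 0, -1))   # already-sorted (descending) adversarial input
    assert randomized_quicksort(data) == sorted(data)
```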

  7. Strategy, culture and innovation performance

    OpenAIRE

    do Nascimento Gambi, Lillian; Boer, Harry

    2015-01-01

    Firms strive for improving their performance, and organizational culture has been recognized as an important driver of better performance. In parallel, strategy is viewed as an important contextual variable that influences organizational culture as well as performance. This study has two main goals: (1) investigating the relationship between strategic practices and innovation performance, and (2) determining if strategy has a direct and/or an indirect, culture-mediated effect on innovation performa...

  8. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    Full Text Available PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements high-performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.

  9. Implementation and performance of parallelized elegant

    International Nuclear Information System (INIS)

    Wang, Y.; Borland, M.

    2008-01-01

    The program elegant is widely used for design and modeling of linacs for free-electron lasers and energy recovery linacs, as well as storage rings and other applications. As part of a multi-year effort, we have parallelized many aspects of the code, including single-particle dynamics, wakefields, and coherent synchrotron radiation. We report on the approach used for gradual parallelization, which proved very beneficial in getting parallel features into the hands of users quickly. We also report details of parallelization of collective effects. Finally, we discuss performance of the parallelized code in various applications.

  10. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment.

    Science.gov (United States)

    Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che

    2014-01-16

    To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel computational framework, high

  11. Designing a parallel evolutionary algorithm for inferring gene networks on the cloud computing environment

    Science.gov (United States)

    2014-01-01

    Background To improve the tedious task of reconstructing gene networks through testing experimentally the possible interactions between genes, it becomes a trend to adopt the automated reverse engineering procedure instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks by the evolutionary algorithm, it is necessary to address two important issues: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel model evolutionary algorithms. To overcome the latter and to speed up the computation, it is advocated to adopt the mechanism of cloud computing as a promising solution: most popular is the method of MapReduce programming model, a fault-tolerant framework to implement parallel algorithms for inferring large gene networks. Results This work presents a practical framework to infer large gene networks, by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use a well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene profiles. Experiments have been conducted and the results have been analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and the computation time can be largely reduced. Conclusions Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely-used sequential algorithms in gene network inference. These parallel algorithms can be distributed to the cloud computing environment to speed up the computation. By coupling the parallel model population-based optimization method and the parallel

  12. An Automation Survival Guide for Media Centers.

    Science.gov (United States)

    Whaley, Roger E.

    1989-01-01

    Reviews factors that should affect the decision to automate a school media center and offers suggestions for the automation process. Topics discussed include getting the library collection ready for automation, deciding what automated functions are needed, evaluating software vendors, selecting software, and budgeting. (CLB)

  13. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2017-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed

  14. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    Full Text Available To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that are probably not limited to this particular application and that seem unlikely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.
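
    A small illustration of the structural property those guidelines aim at, with NumPy array expressions standing in for Fortran D / HPF array syntax; the kernels below are invented and are not taken from GROMOS.

        # A loop with a carried dependence resists data-parallel compilation,
        # while an expression in which every element is computed independently
        # can be distributed across processors.
        import numpy as np

        x = np.linspace(0.0, 1.0, 1000)

        # Hard to parallelize: each iteration depends on the previous result.
        y_seq = np.empty_like(x)
        y_seq[0] = x[0]
        for i in range(1, x.size):
            y_seq[i] = 0.5 * y_seq[i - 1] + x[i]     # loop-carried dependence

        # Data-parallel friendly: a whole-array expression with no dependences.
        y_par = np.sin(x) ** 2 + 0.5 * np.cos(3.0 * x)
        print(y_seq[-1], y_par[-1])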

  15. National Automated Conformity Inspection Process -

    Data.gov (United States)

    Department of Transportation — The National Automated Conformity Inspection Process (NACIP) Application is intended to expedite the workflow process as it pertains to the FAA Form 81 0-10 Request...

  16. Home automation with Intel Galileo

    CERN Document Server

    Dundar, Onur

    2015-01-01

    This book is for anyone who wants to learn Intel Galileo for home automation and cross-platform software development. No knowledge of programming with Intel Galileo is assumed, but knowledge of the C programming language is essential.

  17. Office Automation Boosts University's Productivity.

    Science.gov (United States)

    School Business Affairs, 1986

    1986-01-01

    The University of Pittsburgh has a 2-year agreement designating the Xerox Corporation as the primary supplier of word processing and related office automation equipment in order to increase productivity and make more efficient use of campus resources. (MLF)

  18. Office Automation at Memphis State.

    Science.gov (United States)

    Smith, R. Eugene; And Others

    1986-01-01

    The development of a university-wide office automation plan, beginning with a short-range pilot project and a five-year plan for the entire organization with the potential for modular implementation, is described. (MSE)

  19. The Evaluation of Automated Systems

    National Research Council Canada - National Science Library

    McDougall, Jeffrey

    2004-01-01

    .... The Army has recognized this change and is adapting to operate in this new environment. It has developed a number of automated tools to assist leaders in the command and control of their organizations...

  20. Automation and Human Resource Management.

    Science.gov (United States)

    Taft, Michael

    1988-01-01

    Discussion of the automation of personnel administration in libraries covers (1) new developments in human resource management systems; (2) system requirements; (3) software evaluation; (4) vendor evaluation; (5) selection of a system; (6) training and support; and (7) benefits. (MES)

  1. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    Science.gov (United States)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
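
    A minimal sketch of the top-down half of an oct-tree refinement, with a single point standing in for the body geometry; this illustrates only the data structure, not the authors' parallel or bottom-up machinery.

        # Each cell that lies close to the body and is coarser than the target
        # size is split into eight children, recursively.
        from dataclasses import dataclass, field

        @dataclass
        class Cell:
            origin: tuple                  # (x, y, z) of the cell's low corner
            size: float
            children: list = field(default_factory=list)

        def refine(cell, body, min_size):
            center = [cell.origin[i] + 0.5 * cell.size for i in range(3)]
            near = all(abs(center[i] - body[i]) < cell.size for i in range(3))
            if not near or cell.size <= min_size:
                return
            half = 0.5 * cell.size
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        child = Cell((cell.origin[0] + dx * half,
                                      cell.origin[1] + dy * half,
                                      cell.origin[2] + dz * half), half)
                        cell.children.append(child)
                        refine(child, body, min_size)

        def count(cell):
            return 1 + sum(count(c) for c in cell.children)

        root = Cell((0.0, 0.0, 0.0), 1.0)
        refine(root, body=(0.3, 0.4, 0.5), min_size=1.0 / 32)
        print("cells:", count(root))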

  2. Anesthesiology, automation, and artificial intelligence.

    Science.gov (United States)

    Alexander, John C; Joshi, Girish P

    2018-01-01

    There have been many attempts to incorporate automation into the practice of anesthesiology, though none have been successful. Fundamentally, these failures are due to the underlying complexity of anesthesia practice and the inability of rule-based feedback loops to fully master it. Recent innovations in artificial intelligence, especially machine learning, may usher in a new era of automation across many industries, including anesthesiology. It would be wise to consider the implications of such potential changes before they have been fully realized.

  3. Aprendizaje automático

    OpenAIRE

    Suárez Cueto, Armando

    2006-01-01

    This book introduces the basic concepts of one of the most actively studied branches of artificial intelligence: machine learning. Topics covered include inductive learning, analogical reasoning, explanation-based learning, neural networks, genetic algorithms, case-based reasoning, and theoretical approaches to machine learning.

  4. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  5. Technology modernization assessment flexible automation

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, D.W.; Boyd, D.R.; Hansen, N.H.; Hansen, M.A.; Yount, J.A.

    1990-12-01

    The objectives of this report are: to present technology assessment guidelines to be considered in conjunction with defense regulations before an automation project is developed; to give examples showing how assessment guidelines may be applied to a current project; and to present several potential areas where automation might be applied successfully in the depot system. Depots perform primarily repair and remanufacturing operations, with limited small-batch manufacturing runs. While certain activities (such as Management Information Systems and warehousing) are directly applicable to either environment, the majority of applications will require combining existing and emerging technologies in different ways to meet the special needs of the depot remanufacturing environment. Industry generally enjoys the ability to make revisions to its product lines seasonally, followed by batch runs of thousands or more. Depot batch runs are in the tens, at best the hundreds, of parts with a potential for large variation in product mix; reconfiguration may be required on a week-to-week basis. This need for a higher degree of flexibility suggests a higher level of operator interaction and, in turn, control systems that go beyond the state of the art for less flexible automation and industry in general. This report investigates the benefits of and barriers to automation and concludes that, while significant benefits do exist for automation, depots must be prepared to carefully investigate the technical feasibility of each opportunity and the life-cycle costs associated with implementation. Implementation is suggested in two ways: (1) develop an implementation plan for automation technologies based on results of small demonstration automation projects; (2) use phased implementation for both these and later-stage automation projects to allow major technical and administrative risk issues to be addressed. 10 refs., 2 figs., 2 tabs. (JF)

  6. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    The virtual machine, as an engineering tool, has recently been introduced into automation projects in Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines for automation projects. This paper designs different project scenarios using virtual machines. It analyzes the installability, performance, and stability of virtual machines from the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  7. Towards automated traceability maintenance.

    Science.gov (United States)

    Mäder, Patrick; Gotel, Orlena

    2012-10-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided.

  8. Automated personnel radiation monitor

    International Nuclear Information System (INIS)

    Sterling, S.G.

    1981-01-01

    An automated Personnel Low-Level Radiation Portal Monitor has been developed by UNC Nuclear Industries, Inc. It is micro-computer controlled and uses nineteen large gas flow radiation detectors. By employing a micro-computer, sophisticated mathematical analysis is used on the detector informational data base to determine the statistical probability of contamination. This system provides for: (1) Increased sensitivity to point source contamination; (2) Real time background level compensation before and during Portal occupancy; (3) Variable counting periods as necessary to provide a significant statistical probability of contamination; (4) Continuous self-testing of system components, detector operability and sensitivity; and (5) Multiple modes of operation allowing the operator/owner control from continuous walk-through (for SNM detection at gates) to complete whole body counts (at step-off points from radiation zones). Sr-90 sources of .005 uCi can be detected from the hands and feet with a 90% confidence level, less than .1% false alarm rate with background levels up to 0.1 mR/hr. For the occupant's periphery adjacent to the detectors, a sensitivity of .01 uCi is readily attainable. Alpha particle detection is legitimately available on hands, due to close proximity detection and thin Mylar detector cover techniques.
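
    A sketch of the kind of counting-statistics decision such a portal monitor makes; the Currie-style critical level used below is a textbook approximation and not UNC's published algorithm, and the counts are invented.

        # Compare gross counts collected while the portal is occupied against a
        # background measured just beforehand, and flag contamination when the
        # net count exceeds a decision threshold of k * sqrt(2 * background).
        import math

        def contaminated(gross_counts, background_counts, k=1.645):
            """Return True if net counts exceed the critical level (k=1.645 ~ 95%, one-sided)."""
            critical_level = k * math.sqrt(2.0 * background_counts)
            return (gross_counts - background_counts) > critical_level

        # Example: background interval records 400 counts, occupancy interval records 470.
        print(contaminated(gross_counts=470, background_counts=400))   # True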

  9. Automated personnel radiation monitor

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, S.G.

    1981-06-01

    An automated Personnel Low-Level Radiation Portal Monitor has been developed by UNC Nuclear Industries, Inc. It is micro-computer controlled and uses nineteen large gas flow radiation detectors. By employing a micro-computer, sophisticated mathematical analysis is used on the detector informational data base to determine the statistical probability of contamination. This system provides for: (1) Increased sensitivity to point source contamination; (2) Real time background level compensation before and during Portal occupancy; (3) Variable counting periods as necessary to provide a significant statistical probability of contamination; (4) Continuous self-testing of system components, detector operability and sensitivity; and (5) Multiple modes of operation allowing the operator/owner control from continuous walk-through (for SNM detection at gates) to complete whole body counts (at step-off points from radiation zones). Sr-90 sources of .005 uCi can be detected from the hands and feet with a 90% confidence level, less than .1% false alarm rate with background levels up to 0.1 mR/hr. For the occupant's periphery adjacent to the detectors, a sensitivity of .01 uCi is readily attainable. Alpha particle detection is legitimately available on hands, due to close proximity detection and thin Mylar detector cover techniques.

  10. Particle Accelerator Focus Automation

    Science.gov (United States)

    Lopes, José; Rocha, Jorge; Redondo, Luís; Cruz, João

    2017-08-01

    The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear, of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams with currents of a few μA and energies up to 2 MeV/q. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start up and shut down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.
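
    A minimal sketch of the scanning procedure described above: step the lens bias voltage over a range, read back the beam-stopper current at each setting, and keep the voltage that maximizes it. The read and write functions are hypothetical placeholders for the LabVIEW/DAQ calls on the real machine, and the simulated current response is invented.

        import random

        def set_lens_voltage(volts):
            pass                          # placeholder for the high-voltage / DAC control call

        def read_beam_current(volts):     # placeholder for the beam-stopper ADC read
            return -((volts - 6200.0) ** 2) / 1e6 + 2.0 + random.gauss(0.0, 0.01)

        def focus_scan(v_min, v_max, step):
            best_v, best_i = v_min, float("-inf")
            v = v_min
            while v <= v_max:
                set_lens_voltage(v)
                i = read_beam_current(v)
                if i > best_i:
                    best_v, best_i = v, i
                v += step
            set_lens_voltage(best_v)      # leave the lens at the optimum found
            return best_v, best_i

        print(focus_scan(5000.0, 7000.0, 50.0))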

  11. Automated propellant leak detection

    Science.gov (United States)

    Makel, D. B.; Jansa, E. D.; Bickmore, T. W.; Powers, W. T.

    1993-01-01

    An automated hydrogen leak detection system is being developed for earth-to-orbit rocket engine applications. The system consists of three elements: a sensor array, a signal processing unit, and a diagnostic processor. The sensor array consists of discrete solid state sensors which are located at specific potential leak sites and in potential leak zones. The signal processing unit provides excitation power for the sensors and provides analog to digital data conversion of the sensor signals. The diagnostic computer analyzes the sensor outputs to determine leak sources and magnitude. Leak data from the sensor network is interpreted using knowledge based software and displayed on-line through a graphical user interface including 3-D leak visualization. The system requirements have been developed assuming eventual application to the Space Shuttle Main Engine which requires approximately 72 measurement locations. A prototype system has been constructed to demonstrate the operational features of the system. This system includes both prototype electronics and data processing software algorithms. Experiments are in progress to evaluate system operation at conditions which simulate prelaunch and flight. The prototype system consists of a network using 16 sensors within a testbed which simulates engine components. Sensor response, orientation, and data analysis algorithms are being evaluated using calibrated leaks produced within the testbed. A prototype flight system is also under development consisting of 8 sensors and flight capable electronics with autonomous control and data recording.

  12. Particle Accelerator Focus Automation

    Directory of Open Access Journals (Sweden)

    Lopes José

    2017-08-01

    Full Text Available The Laboratório de Aceleradores e Tecnologias de Radiação (LATR) at the Campus Tecnológico e Nuclear, of Instituto Superior Técnico (IST) has a horizontal electrostatic particle accelerator based on the Van de Graaff machine which is used for research in the area of material characterization. This machine produces alpha (He+) and proton (H+) beams with currents of a few μA and energies up to 2 MeV/q. Beam focusing is obtained using a cylindrical lens of the Einzel type, assembled near the high voltage terminal. This paper describes the developed system that automatically focuses the ion beam, using a personal computer running the LabVIEW software, a multifunction input/output board and signal conditioning circuits. The focusing procedure consists of a scanning method to find the lens bias voltage which maximizes the beam current measured on a beam stopper target, which is used as feedback for the scanning cycle. This system, as part of a wider start up and shut down automation system built for this particle accelerator, brings great advantages to the operation of the accelerator by making it faster and easier to operate, requiring less human presence, and adding the possibility of total remote control in safe conditions.

  13. Automated uranium titration system

    International Nuclear Information System (INIS)

    Takahashi, M.; Kato, Y.

    1983-01-01

    An automated titration system based on the Davies-Gray method has been developed for accurate determination of uranium. The system consists of a potentiometric titrator with precise burettes, a sample changer, an electronic balance and a desk-top computer with a printer. Fifty-five titration vessels are loaded in the sample changer. The first three contain the standard solution for standardizing potassium dichromate titrant, and the next two and the last two contain the control samples for data quality assurance. The other forty-eight measurements are carried out for sixteen unknown samples. Sample solution containing about 100 mg uranium is taken in a titration vessel. At the pretreatment position, uranium (VI) is reduced to uranium (IV) by iron (II). After the valency adjustment, the vessel is transferred to the titration position. The rate of titrant addition is automatically controlled to be slower near the end-point. The last figure (0.01 mL) of the equivalent titrant volume for uranium is calculated from the potential change. The results obtained with this system on 100 mg uranium gave a precision of 0.2% (RSD,n=3) and an accuracy of better than 0.1%. Fifty-five titrations are accomplished in 10 hours. (author)
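
    A short sketch of how an equivalence volume can be pulled out of a potentiometric titration curve: the endpoint is taken where the potential change per unit of added titrant (dE/dV) is largest. The sigmoid curve below is synthetic and purely illustrative; the real system also slows the titrant addition near this point.

        import numpy as np

        volumes = np.arange(0.0, 20.0, 0.05)                     # mL of titrant added
        true_equiv = 12.34
        potentials = 400.0 + 250.0 * np.tanh((volumes - true_equiv) / 0.15)  # mV, synthetic

        dE_dV = np.gradient(potentials, volumes)                 # first derivative of the curve
        endpoint = volumes[np.argmax(dE_dV)]
        # Resolution here is limited by the 0.05 mL step of the synthetic scan.
        print(f"equivalent titrant volume ~ {endpoint:.2f} mL")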

  14. Automated asteroseismic peak detections

    Science.gov (United States)

    García Saravia Ortiz de Montellano, Andrés; Hekker, S.; Themeßl, N.

    2018-05-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection that is time-consuming and has a degree of subjectivity. Here, we present a peak-detection algorithm especially suited for the detection of solar-like oscillations. It reliably characterizes the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterize the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.
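
    A minimal sketch of automated peak detection in a power density spectrum: a synthetic spectrum with a few Lorentzian oscillation modes on multiplicative noise, searched by prominence. scipy's generic find_peaks is used purely for illustration and is not the authors' algorithm; all frequencies and amplitudes are invented.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(1)
        freq = np.linspace(50.0, 150.0, 4000)                     # microHz
        spectrum = np.ones_like(freq)                             # flat background
        for nu0, height, width in [(90.0, 40.0, 0.3), (100.0, 60.0, 0.3), (110.0, 35.0, 0.3)]:
            spectrum += height / (1.0 + ((freq - nu0) / width) ** 2)   # Lorentzian modes
        spectrum *= rng.exponential(1.0, size=freq.size)          # chi-squared-like noise

        # Light smoothing, then pick peaks that stand well above the background.
        smoothed = np.convolve(spectrum, np.ones(25) / 25, mode="same")
        peaks, props = find_peaks(smoothed, prominence=5.0)
        print("detected mode frequencies (microHz):", np.round(freq[peaks], 1))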

  15. Automated ISS Flight Utilities

    Science.gov (United States)

    Offermann, Jan Tuzlic

    2016-01-01

    During my internship at NASA Johnson Space Center, I worked in the Space Radiation Analysis Group (SRAG), where I was tasked with a number of projects focused on the automation of tasks and activities related to the operation of the International Space Station (ISS). As I worked on a number of projects, I have written short sections below to give a description for each, followed by more general remarks on the internship experience. My first project is titled "General Exposure Representation EVADOSE", also known as "GEnEVADOSE". This project involved the design and development of a C++/ ROOT framework focused on radiation exposure for extravehicular activity (EVA) planning for the ISS. The utility helps mission managers plan EVAs by displaying information on the cumulative radiation doses that crew will receive during an EVA as a function of the egress time and duration of the activity. SRAG uses a utility called EVADOSE, employing a model of the space radiation environment in low Earth orbit to predict these doses, as while outside the ISS the astronauts will have less shielding from charged particles such as electrons and protons. However, EVADOSE output is cumbersome to work with, and prior to GEnEVADOSE, querying data and producing graphs of ISS trajectories and cumulative doses versus egress time required manual work in Microsoft Excel. GEnEVADOSE automates all this work, reading in EVADOSE output file(s) along with a plaintext file input by the user providing input parameters. GEnEVADOSE will output a text file containing all the necessary dosimetry for each proposed EVA egress time, for each specified EVADOSE file. It also plots cumulative dose versus egress time and the ISS trajectory, and displays all of this information in an auto-generated presentation made in LaTeX. New features have also been added, such as best-case scenarios (egress times corresponding to the least dose), interpolated curves for trajectories, and the ability to query any time in the

  16. Automated Supernova Discovery (Abstract)

    Science.gov (United States)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of Supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data is taken every night without clouds, we must deal with varying atmospheric and high background illumination from the moon. Software is configured to identify a PSN, reshoot for verification with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24, with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs with magnitude 17.5 or less which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  17. Parallelisms and revelatory concepts of the Johannine Prologue in Greco-Roman context

    Directory of Open Access Journals (Sweden)

    Benno Zuiddam

    2016-05-01

    Full Text Available This article builds on the increasing recognition of divine communication and God’s plan as a central concept in the prologue to the Fourth gospel. A philological analysis reveals parallel structures with an emphasis on divine communication in which the Logos takes a central part. These should be understood within the context of this gospel, but have their roots in the Old Testament. The Septuagint offers parallel concepts, particularly in its wisdom literature. Apart from these derivative parallels, the revelatory concepts and terminology involved in John 1:1–18, also find functional parallels in the historical environment of the fourth gospel. They share similarities with the role of Apollo Phoebus in the traditionally assigned geographical context of the region of Ephesus in Asia Minor. This functional parallelism served the reception of John’s biblical message in a Greco-Roman cultural setting. Keywords: John's Gospel; Apollo Phoebus; Logos; Revelation; Ephesus

  18. Device for balancing parallel strings

    Science.gov (United States)

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  19. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensor heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensor systems of the other sensor heads. Thus, sensor heads in the plurality of sensor heads generate sensor data in parallel.

  20. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'Entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  1. An Office Automation Needs Assessment Model

    Science.gov (United States)

    1985-08-01

    This study examines the office automation needs of an Army Hospital. Based on a literature review and interviews with industry experts, a model was developed to assess office automation needs. The model was applied against the needs of the Clinical Support Division. The author identified a need for a strategic plan for office automation prior to analysis of a specific service for automation. He recommended establishment of a Hospital Automation Advisory Council to centralize policy recommendations for office automation.

  2. FULLY AUTOMATED IMAGE ORIENTATION IN THE ABSENCE OF TARGETS

    Directory of Open Access Journals (Sweden)

    C. Stamatopoulos

    2012-07-01

    Full Text Available Automated close-range photogrammetric network orientation has traditionally been associated with the use of coded targets in the object space to allow for an initial relative orientation (RO) and subsequent spatial resection of the images. Over the past decade, automated orientation via feature-based matching (FBM) techniques has attracted renewed research attention in both the photogrammetry and computer vision (CV) communities. This is largely due to advances made towards the goal of automated relative orientation of multi-image networks covering untargeted (markerless) objects. There are now a number of CV-based algorithms, with accompanying open-source software, that can achieve multi-image orientation within narrow-baseline networks. From a photogrammetric standpoint, the results are typically disappointing as the metric integrity of the resulting models is generally poor, or even unknown, while the number of outliers within the image matching and triangulation is large, and generally too large to allow relative orientation (RO) via the commonly used coplanarity equations. On the other hand, there are few examples within the photogrammetric research field of automated markerless camera calibration to metric tolerances, and these too are restricted to narrow-baseline, low-convergence imaging geometry. The objective addressed in this paper is markerless automatic multi-image orientation, maintaining metric integrity, within networks that incorporate wide-baseline imagery. By wide-baseline we imply convergent multi-image configurations with convergence angles of up to around 90°. An associated aim is provision of a fast, fully automated process, which can be performed without user intervention. For this purpose, various algorithms require optimisation to allow parallel processing utilising multiple PC cores and graphics processing units (GPUs).
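
    A minimal sketch of the feature-based matching pipeline this line of work builds on: detect and match features between two images, then recover the relative orientation from the essential matrix. OpenCV is assumed, the camera matrix K and the image file names are placeholders, and this is a generic CV recipe rather than the authors' method.

        import numpy as np
        import cv2

        K = np.array([[3000.0, 0.0, 2000.0],        # assumed interior orientation
                      [0.0, 3000.0, 1500.0],
                      [0.0, 0.0, 1.0]])

        img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file names
        img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Ratio-test matching to suppress outliers before the geometric step.
        matcher = cv2.BFMatcher()
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

        # Robust relative orientation: essential matrix with RANSAC, then pose.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        print("rotation:\n", R, "\nbaseline direction:", t.ravel())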

  3. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Baofeng Li

    2009-01-01

    Full Text Available Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture of WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and using parallel memory access. A proof-of-concept implementation with 1 BWAGIR processing unit of the architecture performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed with the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  4. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Li Baofeng

    2009-01-01

    Full Text Available Abstract Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture of WAGIR is desired. In this paper, a BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining computational logic, parallelizing the resampling process and the calculation of the correlation coefficient, and using parallel memory access. A proof-of-concept implementation with 1 BWAGIR processing unit of the architecture performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed with the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  5. Integrated Task and Data Parallel Programming

    Science.gov (United States)

    Grimshaw, A. S.

    1998-01-01

    This research investigates the combination of task and data parallel language constructs within a single programming language. There are a number of applications that exhibit properties which would be well served by such an integrated language. Examples include global climate models, aircraft design problems, and multidisciplinary design optimization problems. Our approach incorporates data parallel language constructs into an existing, object oriented, task parallel language. The language will support creation and manipulation of parallel classes and objects of both types (task parallel and data parallel). Ultimately, the language will allow data parallel and task parallel classes to be used either as building blocks or managers of parallel objects of either type, thus allowing the development of single and multi-paradigm parallel applications. 1995 Research Accomplishments In February I presented a paper at Frontiers 1995 describing the design of the data parallel language subset. During the spring I wrote and defended my dissertation proposal. Since that time I have developed a runtime model for the language subset. I have begun implementing the model and hand-coding simple examples which demonstrate the language subset. I have identified an astrophysical fluid flow application which will validate the data parallel language subset. 1996 Research Agenda Milestones for the coming year include implementing a significant portion of the data parallel language subset over the Legion system. Using simple hand-coded methods, I plan to demonstrate (1) concurrent task and data parallel objects and (2) task parallel objects managing both task and data parallel objects. My next steps will focus on constructing a compiler and implementing the fluid flow application with the language. Concurrently, I will conduct a search for a real-world application exhibiting both task and data parallelism within the same program. Additional 1995 Activities During the fall I collaborated

  6. Urine culture

    Science.gov (United States)

    Culture and sensitivity - urine ... when urinating. You also may have a urine culture after you have been treated for an infection. ... when bacteria or yeast are found in the culture. This likely means that you have a urinary ...

  7. Safeguards Culture

    Energy Technology Data Exchange (ETDEWEB)

    Frazar, Sarah L.; Mladineo, Stephen V.

    2012-07-01

    The concepts of nuclear safety and security culture are well established; however, a common understanding of safeguards culture is not internationally recognized. Supported by the National Nuclear Security Administration, the authors prepared this report, an analysis of the concept of safeguards culture, and gauged its value to the safeguards community. The authors explored distinctions between safeguards culture, safeguards compliance, and safeguards performance, and evaluated synergies and differences between safeguards culture and safety/security culture. The report concludes with suggested next steps.

  8. 77 FR 48527 - National Customs Automation Program (NCAP) Test Concerning Automated Commercial Environment (ACE...

    Science.gov (United States)

    2012-08-14

    ... DEPARTMENT OF HOMELAND SECURITY U.S. Customs and Border Protection National Customs Automation... National Customs Automation Program (NCAP) test concerning the simplified entry functionality in the... INFORMATION: Background In General Customs and Border Protection's (CBP's) National Customs Automation Program...

  9. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory]; Wu, Xing [NCSU]; Mueller, Frank [NCSU]

    2011-01-19

    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications for new hardware platforms. Yet, past techniques for synthetically parameterized, hand-coded HPC benchmarks prove insufficient for today's rapidly-evolving scientific codes, particularly when subject to multi-scale science modeling or when utilizing domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless, yet scalable, parallel application tracing framework to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks are able to preserve the run-time behavior - including both the communication pattern and the execution time - of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is novel and without precedent, to our knowledge.
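
    A minimal sketch of the trace-to-benchmark idea: replay a recorded list of communication events as a stand-alone C + MPI source file. The trace format, the single-rank trace, and the generated skeleton are all invented for illustration; they are not ScalaTrace's trace format or CONCEPTUAL's generated output.

        TRACE_RANK0 = [                  # (operation, peer rank, message bytes) for rank 0
            ("send", 1, 4096),
            ("recv", 1, 4096),
            ("send", 1, 8192),
        ]

        def emit_benchmark(trace, rank):
            calls = []
            for op, peer, nbytes in trace:
                if op == "send":
                    calls.append(f"        MPI_Send(buf, {nbytes}, MPI_BYTE, {peer}, 0, MPI_COMM_WORLD);")
                else:
                    calls.append(f"        MPI_Recv(buf, {nbytes}, MPI_BYTE, {peer}, 0, "
                                 "MPI_COMM_WORLD, MPI_STATUS_IGNORE);")
            return "\n".join([
                "#include <mpi.h>",
                "static char buf[1 << 20];",
                "int main(int argc, char **argv) {",
                "    int rank;",
                "    MPI_Init(&argc, &argv);",
                "    MPI_Comm_rank(MPI_COMM_WORLD, &rank);",
                f"    if (rank == {rank}) {{",
                *calls,
                "    }",
                "    /* a real generator would emit the matching trace for every rank */",
                "    MPI_Finalize();",
                "    return 0;",
                "}",
            ])

        print(emit_benchmark(TRACE_RANK0, rank=0))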

  10. Organizational Culture

    Directory of Open Access Journals (Sweden)

    Adrian HUDREA

    2006-02-01

    Full Text Available Cultural orientations of an organization can be its greatest strength, providing the basis for problem solving, cooperation, and communication. Culture, however, can also inhibit needed changes. Cultural changes typically happen slowly – but without cultural change, many other organizational changes are doomed to fail. The dominant culture of an organization is a major contributor to its success. But, of course, no organizational culture is purely one type or another. And the existence of secondary cultures can provide the basis for change. Therefore, organizations need to understand the cultural environments and values.

  11. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers are electric power system equipment whose reliability strongly influences the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to clear a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil-circuit and air-break circuit breakers rise systematically. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. This demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repair, advanced software developments, and a specific automated information system (AIS). The new AIS, designated AISV, was developed at the "Reliability of Power Equipment" department of AzRDSI of Energy. The main features of AISV are: to provide database security and accuracy; to carry out systematic control of breaker conformity with operating conditions; to estimate individual reliability values and how they change for a given combination of characteristics; and to provide the personnel responsible for technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problems and advanced methods for their realization.

  12. Automated ship image acquisition

    Science.gov (United States)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically-composed photographs collected in Halifax harbour in winter time was determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.
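
    A small sketch of the geometry such a system must solve before composing a shot: convert an AIS position report into a pan angle (bearing) and an approximate range from the camera site. The formulas are the standard great-circle bearing and haversine range; the shore-station and ship coordinates below are made up.

        import math

        def bearing_and_range(cam_lat, cam_lon, ship_lat, ship_lon):
            lat1, lon1, lat2, lon2 = map(math.radians, (cam_lat, cam_lon, ship_lat, ship_lon))
            dlon = lon2 - lon1
            y = math.sin(dlon) * math.cos(lat2)
            x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
            bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0   # degrees true
            a = (math.sin((lat2 - lat1) / 2) ** 2
                 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
            range_m = 2 * 6371000.0 * math.asin(math.sqrt(a))            # haversine, metres
            return bearing, range_m

        # Hypothetical shore camera near Halifax harbour and an AIS target position.
        print(bearing_and_range(44.666, -63.570, 44.650, -63.545))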

  13. STAMPS: software tool for automated MRI post-processing on a supercomputer

    OpenAIRE

    Bigler, Don C.; Aksu, Yaman; Yang, Qing X.

    2009-01-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features....

  14. Cultural evolutionary theory: How culture evolves and why it matters.

    Science.gov (United States)

    Creanza, Nicole; Kolodny, Oren; Feldman, Marcus W

    2017-07-24

    Human cultural traits (behaviors, ideas, and technologies that can be learned from other individuals) can exhibit complex patterns of transmission and evolution, and researchers have developed theoretical models, both verbal and mathematical, to facilitate our understanding of these patterns. Many of the first quantitative models of cultural evolution were modified from existing concepts in theoretical population genetics because cultural evolution has many parallels with, as well as clear differences from, genetic evolution. Furthermore, cultural and genetic evolution can interact with one another and influence both transmission and selection. This interaction requires theoretical treatments of gene-culture coevolution and dual inheritance, in addition to purely cultural evolution. In addition, cultural evolutionary theory is a natural component of studies in demography, human ecology, and many other disciplines. Here, we review the core concepts in cultural evolutionary theory as they pertain to the extension of biology through culture, focusing on cultural evolutionary applications in population genetics, ecology, and demography. For each of these disciplines, we review the theoretical literature and highlight relevant empirical studies. We also discuss the societal implications of the study of cultural evolution and of the interactions of humans with one another and with their environment.

  15. Mainstreaming culture in psychology.

    Science.gov (United States)

    Cheung, Fanny M

    2012-11-01

    Despite the "awakening" to the importance of culture in psychology in America, international psychology has remained on the sidelines of psychological science. The author recounts her personal and professional experience in tandem with the stages of development in international/cross-cultural psychology. Based on her research in cross-cultural personality assessment, the author discusses the inadequacies of sole reliance on either the etic or the emic approach and points out the advantages of a combined emic-etic approach in bridging global and local human experiences in psychological science and practice. With the blurring of the boundaries between North American-European psychologies and psychology in the rest of the world, there is a need to mainstream culture in psychology's epistemological paradigm. Borrowing from the concept of gender mainstreaming that embraces both similarities and differences in promoting equal opportunities, the author discusses the parallel needs of acknowledging universals and specifics when mainstreaming culture in psychology. She calls for building a culturally informed universal knowledge base that should be incorporated in the psychology curriculum and textbooks. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  16. Automating Microbial Directed Evolution For Bioengineering Applications

    Science.gov (United States)

    Lee, A.; Demachkie, I. S.; Sardesh, N.; Arismendi, D.; Ouandji, C.; Wang, J.; Blaich, J.; Gentry, D.

    2016-12-01

    From a microbiology perspective, directed evolution is a technique that uses controlled environmental pressures to select for a desired phenotype. Directed evolution has the distinct advantage over rational design of not needing extensive knowledge of the genome or pathways associated with a microorganism to induce phenotypes. However, there are currently limitations to the applicability of this technique, including being time-consuming, error-prone, and dependent on existing assays that may lack selectivity for the given phenotype. The AADEC (Autonomous Adaptive Directed Evolution Chamber) system is a proof-of-concept instrument to automate and improve the technique such that directed evolution can be used more effectively as a general bioengineering tool. A series of tests using the automated system and comparable by-hand survival assay measurements have been carried out using UV-C radiation and Escherichia coli cultures in order to demonstrate the advantages of the AADEC versus traditional implementations of directed evolution such as random mutagenesis. AADEC uses UV-C exposure as both a source of environmental stress and mutagenesis, so in order to evaluate the UV-C tolerance obtained from the cultures, a manual UV-C exposure survival assay was developed alongside the device to compare the survival fractions at a fixed dosage. This survival assay involves exposing E. coli to UV-C radiation using a custom-designed exposure hood to control the flux and dose. Surviving cells are counted, then transferred to the next iteration, and so on for several iterations to calculate the survival fractions for each exposure iteration. This survival assay primarily serves as a baseline for the AADEC device, allowing quantification of the differences between the AADEC system and the manual approach. The primary data of comparison is survival fractions; this is obtained by optical density and plate counts in the manual assay and by optical density growth curve fits pre- and post
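
    A minimal sketch of the survival-fraction bookkeeping such a comparison rests on: survival is the ratio of colony-forming units after a UV-C dose to the unexposed control, tracked per selection round. The counts below are invented for illustration.

        def survival_fraction(exposed_cfu, control_cfu, dilution_factor=1.0):
            """CFU ratio, correcting for any extra dilution of the exposed plate."""
            return (exposed_cfu * dilution_factor) / control_cfu

        iterations = [  # (control plate CFU, exposed plate CFU) per selection round
            (312, 41),
            (298, 77),
            (305, 150),
        ]
        for n, (control, exposed) in enumerate(iterations, start=1):
            print(f"iteration {n}: survival fraction = {survival_fraction(exposed, control):.3f}")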

  17. The Same-Source Parallel MM5

    Directory of Open Access Journals (Sweden)

    John Michalakes

    2000-01-01

    Full Text Available Beginning with the March 1998 release of the Penn State University/NCAR Mesoscale Model (MM5), and continuing through eight subsequent releases up to the present, the official version has run on distributed-memory (DM) parallel computers. Source translation and runtime library support minimize the impact of parallelization on the original model source code, with the result that the majority of code is line-for-line identical with the original version. Parallel performance and scaling are equivalent to earlier, hand-parallelized versions; the modifications have no effect when the code is compiled and run without the DM option. Supported computers include the IBM SP, Cray T3E, Fujitsu VPP, Compaq Alpha clusters, and clusters of PCs (so-called Beowulf clusters). The approach also is compatible with shared-memory parallel directives, allowing distributed-memory/shared-memory hybrid parallelization on distributed-memory clusters of symmetric multiprocessors.

  18. Parallel processing for fluid dynamics applications

    International Nuclear Information System (INIS)

    Johnson, G.M.

    1989-01-01

    The impact of parallel processing on computational science and, in particular, on computational fluid dynamics is growing rapidly. In this paper, particular emphasis is given to developments which have occurred within the past two years. Parallel processing is defined and the reasons for its importance in high-performance computing are reviewed. Parallel computer architectures are classified according to the number and power of their processing units, their memory, and the nature of their connection scheme. Architectures which show promise for fluid dynamics applications are emphasized. Fluid dynamics problems are examined for parallelism inherent at the physical level. CFD algorithms and their mappings onto parallel architectures are discussed. Several examples are presented to document the performance of fluid dynamics applications on present-generation parallel processing devices.

  19. Design considerations for parallel graphics libraries

    Science.gov (United States)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  20. Runtime volume visualization for parallel CFD

    Science.gov (United States)

    Ma, Kwan-Liu

    1995-01-01

    This paper discusses some aspects of design of a data distributed, massively parallel volume rendering library for runtime visualization of parallel computational fluid dynamics simulations in a message-passing environment. Unlike the traditional scheme in which visualization is a postprocessing step, the rendering is done in place on each node processor. Computational scientists who run large-scale simulations on a massively parallel computer can thus perform interactive monitoring of their simulations. The current library provides an interface to handle volume data on rectilinear grids. The same design principles can be generalized to handle other types of grids. For demonstration, we run a parallel Navier-Stokes solver making use of this rendering library on the Intel Paragon XP/S. The interactive visual response achieved is found to be very useful. Performance studies show that the parallel rendering process is scalable with the size of the simulation as well as with the parallel computer.

  1. Parallelism, deep homology, and evo-devo.

    Science.gov (United States)

    Hall, Brian K

    2012-01-01

    Parallelism has been the subject of a number of recent studies that have resulted in reassessment of the term and the process. Parallelism has been aligned with homology leaving convergence as the only case of homoplasy, regarded as a transition between homologous and convergent characters, and defined as the independent evolution of genetic traits. Another study advocates abolishing the term parallelism and treating all cases of the independent evolution of characters as convergence. With the sophistication of modern genomics and genetic analysis, parallelism of characters of the phenotype is being discovered to reflect parallel genetic evolution. Approaching parallelism from developmental and genetic perspectives enables us to tease out the degree to which the reuse of pathways represent deep homology and is a major task for evolutionary developmental biology in the coming decades. © 2012 Wiley Periodicals, Inc.

  2. Parallel processing from applications to systems

    CERN Document Server

    Moldovan, Dan I

    1993-01-01

    This text provides one of the broadest presentations of parallel processing available, including the structure of parallel processors and parallel algorithms. The emphasis is on mapping algorithms to highly parallel computers, with extensive coverage of array and multiprocessor architectures. Early chapters provide insightful coverage on the analysis of parallel algorithms and program transformations, effectively integrating a variety of material previously scattered throughout the literature. Theory and practice are well balanced across diverse topics in this concise presentation. For exceptional cla...

  3. Kinematics Analysis of Two Parallel Locomotion Mechanisms

    Science.gov (United States)

    2010-08-27

    1998, "Design Considerations of New Six Degrees-of-Freedom Parallel Robots," Leuven, Belgium, 2, pp. 1327-1333. [19] Angeles, J., Guilin , Y., and...Parallel Manipulators: Identification and Elimination," Minneapolis, MN, USA, 72, pp. 459-466. [25] Dash, A. K., Chen, I. M., Song Huat, Y., and Guilin ...28] Guilin , Y., Chen, I. M., Wei, L., and Angeles, J., 2001, "Singularity Analysis of Three-Legged Parallel Robots Based on Passive-Joint Velocities

  4. A survey of parallel multigrid algorithms

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1987-01-01

    A typical multigrid algorithm applied to well-behaved linear elliptic partial differential equations (PDEs) is described. Criteria for designing and evaluating parallel algorithms are presented. Before evaluating the performance of some parallel multigrid algorithms, consideration is given to some theoretical complexity results for solving PDEs in parallel and for executing the multigrid algorithm. The effect of mapping and load imbalance on the parallel efficiency of the algorithm is studied.
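
    To make the algorithm under discussion concrete, the following is a minimal, serial V-cycle for the one-dimensional Poisson equation. It is a hedged sketch: the grid size, smoother weight, and transfer operators are illustrative choices, and the parallel variants surveyed in the paper distribute each grid level (and hence these loops) across processors.

```python
# Minimal V-cycle for -u'' = f on a uniform 1D grid with zero boundary values.
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Weighted Jacobi smoother (weight 0.6) on interior points."""
    for _ in range(sweeps):
        u[1:-1] += 0.6 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    if u.size <= 3:                        # coarsest grid: just smooth a lot
        return smooth(u, f, h, sweeps=50)
    u = smooth(u, f, h)                    # pre-smoothing
    r = residual(u, f, h)
    r_coarse = r[::2].copy()               # restriction (injection)
    e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    e = np.zeros_like(u)                   # prolongation (linear interpolation)
    e[::2] = e_coarse
    e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
    u += e                                 # coarse-grid correction
    return smooth(u, f, h)                 # post-smoothing

n = 129                                    # 2^k + 1 points so coarsening is exact
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)         # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```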

  5. Parallel processing for artificial intelligence 1

    CERN Document Server

    Kanal, LN; Kumar, V; Suttner, CB

    1994-01-01

    Parallel processing for AI problems is of great current interest because of its potential for alleviating the computational demands of AI procedures. The articles in this book consider parallel processing for problems in several areas of artificial intelligence: image processing, knowledge representation in semantic networks, production rules, mechanization of logic, constraint satisfaction, parsing of natural language, data filtering and data mining. The publication is divided into six sections. The first addresses parallel computing for processing and understanding images. The second discus

  6. On-the-fly pipeline parallelism

    OpenAIRE

    Lee, I-Ting Angelina; Leiserson, Charles E.; Sukha, Jim; Zhang, Zhunping; Schardl, Tao Benjamin

    2013-01-01

    Pipeline parallelism organizes a parallel program as a linear sequence of s stages. Each stage processes elements of a data stream, passing each processed data element to the next stage, and then taking on a new element before the subsequent stages have necessarily completed their processing. Pipeline parallelism is used especially in streaming applications that perform video, audio, and digital signal processing. Three out of 13 benchmarks in PARSEC, a popular software benchmark suite design...
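
    As a concrete illustration of the pattern described above, the sketch below wires three stages together with threads and bounded queues: each stage passes its result downstream and starts on the next element before later stages have finished. The stage functions and stream contents are placeholders, and this shows only the general pipeline pattern, not the on-the-fly scheduling mechanism the paper itself proposes.

```python
# Pipeline parallelism with threads and queues: each stage consumes from its input
# stream, pushes results downstream, and immediately takes the next element.
import threading
import queue

SENTINEL = object()   # marks the end of the data stream

def stage(name, work, inbox, outbox):
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(work(item))

# Three stages: decode -> transform -> encode (stand-ins for e.g. video filters).
q0, q1, q2, q3 = (queue.Queue(maxsize=4) for _ in range(4))
stages = [
    threading.Thread(target=stage, args=("decode",    lambda x: x * 2,    q0, q1)),
    threading.Thread(target=stage, args=("transform", lambda x: x + 1,    q1, q2)),
    threading.Thread(target=stage, args=("encode",    lambda x: f"<{x}>", q2, q3)),
]
for t in stages:
    t.start()

for element in range(8):      # feed the data stream into the first stage
    q0.put(element)
q0.put(SENTINEL)

results = []
while True:                   # drain the last stage
    item = q3.get()
    if item is SENTINEL:
        break
    results.append(item)
print(results)                # ['<1>', '<3>', '<5>', ...]

for t in stages:
    t.join()
```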

  7. Using light to shape chemical gradients for parallel and automated analysis of chemotaxis.

    Science.gov (United States)

    Collins, Sean R; Yang, Hee Won; Bonger, Kimberly M; Guignet, Emmanuel G; Wandless, Thomas J; Meyer, Tobias

    2015-04-23

    Numerous molecular components have been identified that regulate the directed migration of eukaryotic cells toward sources of chemoattractant. However, how the components of this system are wired together to coordinate multiple aspects of the response, such as directionality, speed, and sensitivity to stimulus, remains poorly understood. Here we developed a method to shape chemoattractant gradients optically and analyze cellular chemotaxis responses of hundreds of living cells per well in 96-well format by measuring speed changes and directional accuracy. We then systematically characterized migration and chemotaxis phenotypes for 285 siRNA perturbations. A key finding was that the G-protein Giα subunit selectively controls the direction of migration while the receptor and Gβ subunit proportionally control both speed and direction. Furthermore, we demonstrate that neutrophils chemotax persistently in response to gradients of fMLF but only transiently in response to gradients of ATP. The method we introduce is applicable for diverse chemical cues and systematic perturbations, can be used to measure multiple cell migration and signaling parameters, and is compatible with low- and high-resolution fluorescence microscopy. © 2015 The Authors. Published under the terms of the CC BY 4.0 license.

  8. Automated Parallel Computing Tools for Multicore Machines and Clusters, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to improve productivity of high performance computing for applications on multicore computers and clusters. These machines built from one or more chips...

  9. Vision feedback driven automated assembly of photopolymerized structures by parallel optical trapping and manipulation

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Perch-Nielsen, Ivan Ryberg; Rodrigo, Peter John

    2007-01-01

    We demonstrate how optical trapping and manipulation can be used to assemble microstructures. The microstructures we show being automatically recognized and manipulated are produced using the two-photon polymerization (2PP) technique with submicron resolution. In this work, we show identical shape...

  10. Parallel Genetic Algorithm for Alpha Spectra Fitting

    Science.gov (United States)

    García-Orellana, Carlos J.; Rubio-Montero, Pilar; González-Velasco, Horacio

    2005-01-01

    We present a performance study of alpha-particle spectra fitting using a parallel genetic algorithm (GA). The method uses a two-step approach. In the first step we run the parallel GA to find an initial solution for the second step, in which we use the Levenberg-Marquardt (LM) method for a precise final fit. The GA is a highly resource-demanding method, so we use a Beowulf cluster for parallel simulation. The relationship between simulation time (and parallel efficiency) and the number of processors is studied using several alpha spectra, with the aim of obtaining a method to estimate the optimal number of processors to use in a simulation.
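
    The two-step scheme lends itself to a compact sketch: a genetic algorithm whose fitness evaluations are farmed out to worker processes (a stand-in for the Beowulf cluster) produces a rough solution, which the Levenberg-Marquardt method then refines. The Gaussian peak model, parameter ranges, and GA settings below are illustrative assumptions, not the line shape or configuration used in the paper.

```python
# Two-step fit: parallel GA for a rough solution, then Levenberg-Marquardt refinement.
import numpy as np
from multiprocessing import Pool
from scipy.optimize import least_squares

CHANNELS = np.arange(512, dtype=float)

def model(params, x=CHANNELS):
    """Two Gaussian peaks (placeholder for the alpha line shape)."""
    a1, m1, s1, a2, m2, s2 = params
    return a1 * np.exp(-0.5 * ((x - m1) / s1) ** 2) + a2 * np.exp(-0.5 * ((x - m2) / s2) ** 2)

TRUE = np.array([100.0, 180.0, 6.0, 60.0, 310.0, 8.0])
rng = np.random.default_rng(0)
SPECTRUM = model(TRUE) + rng.normal(0.0, 2.0, CHANNELS.size)     # synthetic spectrum

LO = np.array([10.0, 100.0, 2.0, 10.0, 250.0, 2.0])              # GA search box
HI = np.array([200.0, 250.0, 15.0, 150.0, 400.0, 15.0])

def fitness(individual):
    return float(np.sum((SPECTRUM - model(individual)) ** 2))    # chi-square-like cost

def genetic_search(generations=40, pop_size=64, workers=4):
    pop = rng.uniform(LO, HI, size=(pop_size, LO.size))
    with Pool(workers) as pool:                                   # parallel fitness evaluation
        for _ in range(generations):
            scores = np.array(pool.map(fitness, pop))
            order = np.argsort(scores)
            elite = pop[order[: pop_size // 4]]                   # selection
            parents = elite[rng.integers(0, elite.shape[0], size=(pop_size, 2))]
            pop = 0.5 * (parents[:, 0] + parents[:, 1])           # blend crossover
            pop += rng.normal(0.0, 0.02, pop.shape) * (HI - LO)   # mutation
            pop = np.clip(pop, LO, HI)
            pop[0] = elite[0]                                     # keep the best individual
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]

if __name__ == "__main__":
    rough = genetic_search()
    fit = least_squares(lambda p: SPECTRUM - model(p), rough, method="lm")  # LM refinement
    print("GA estimate:", np.round(rough, 2))
    print("LM refined :", np.round(fit.x, 2))
```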

  11. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
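
    The reproducibility issue mentioned above is easy to demonstrate: floating-point addition is not associative, so the same values summed with a different partitioning (for example, a different number of MPI ranks) can give slightly different totals. The sketch below shows the effect and one generic remedy, a fixed-order pairwise reduction; it is an illustration only, not necessarily the specific technique developed in the LANL projects.

```python
# Floating-point sums depend on evaluation order; a fixed-shape pairwise (tree)
# reduction gives the same answer regardless of how the data was partitioned.
import numpy as np

rng = np.random.default_rng(42)
values = rng.normal(0.0, 1.0, 100_000) * 10.0 ** rng.integers(-8, 8, 100_000)

def naive_partitioned_sum(x, nranks):
    """Each 'rank' sums its slice left to right, then partial sums are added in rank order."""
    partials = []
    for chunk in np.array_split(x, nranks):
        acc = 0.0
        for v in chunk:
            acc += v                        # sequential accumulation on each rank
        partials.append(acc)
    return sum(partials)

def pairwise_sum(x):
    """Deterministic tree reduction: one fixed order, independent of process count."""
    x = np.asarray(x, dtype=np.float64).copy()
    n = x.size
    while n > 1:
        half = n // 2
        x[:half] += x[n - half:n]           # fold the upper half onto the lower half
        n = n - half
    return float(x[0])

print("4 ranks, left-to-right:", naive_partitioned_sum(values, 4))
print("7 ranks, left-to-right:", naive_partitioned_sum(values, 7))
print("pairwise, any layout  :", pairwise_sum(values))
```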

  12. Parallel thermal radiation transport in two dimensions

    International Nuclear Information System (INIS)

    Smedley-Stevenson, R.P.; Ball, S.R.

    2003-01-01

    This paper describes the distributed-memory parallel implementation of a deterministic thermal radiation transport algorithm in a 2-dimensional ALE hydrodynamics code. The parallel algorithm consists of a variety of components which are combined in order to produce a state-of-the-art computational capability, capable of solving large thermal radiation transport problems using Blue-Oak, the 3 teraflop MPP (massively parallel processing) computing facility at AWE (United Kingdom). Particular aspects of the parallel algorithm are described together with examples of the performance on some challenging applications. (author)

  13. Productive Parallel Programming: The PCN Approach

    Directory of Open Access Journals (Sweden)

    Ian Foster

    1992-01-01

    Full Text Available We describe the PCN programming system, focusing on those features designed to improve the productivity of scientists and engineers using parallel supercomputers. These features include a simple notation for the concise specification of concurrent algorithms, the ability to incorporate existing Fortran and C code into parallel applications, facilities for reusing parallel program components, a portable toolkit that allows applications to be developed on a workstation or small parallel computer and run unchanged on supercomputers, and integrated debugging and performance analysis tools. We survey representative scientific applications and identify problem classes for which PCN has proved particularly useful.

  14. High performance parallel I/O

    CERN Document Server

    Prabhat

    2014-01-01

    Gain Critical Insight into the Parallel I/O Ecosystem. Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, High Performance Parallel I/O draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. The first part of the book explains how large-scale HPC facilities scope, configure, and operate systems, with an emphasis on choices of I/O har...

  15. Conformal pure radiation with parallel rays

    International Nuclear Information System (INIS)

    Leistner, Thomas; Nurowski, Paweł

    2012-01-01

    We define pure radiation metrics with parallel rays to be n-dimensional pseudo-Riemannian metrics that admit a parallel null line bundle K and whose Ricci tensor vanishes on vectors that are orthogonal to K. We give necessary conditions in terms of the Weyl, Cotton and Bach tensors for a pseudo-Riemannian metric to be conformal to a pure radiation metric with parallel rays. Then, we derive conditions in terms of the tractor calculus that are equivalent to the existence of a pure radiation metric with parallel rays in a conformal class. We also give analogous results for n-dimensional pseudo-Riemannian pp-waves. (paper)
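
    Restating the definition from the abstract in symbols (the notation for the line bundle and its orthogonal complement is assumed here):

```latex
% Pure radiation metric with parallel rays, as defined in the abstract.
\[
  K \subset TM \ \text{a null line bundle with}\ \nabla K \subseteq K
  \qquad\text{(the rays are parallel),}
\]
\[
  \operatorname{Ric}(X,Y) = 0 \quad\text{for all}\ X, Y \in K^{\perp}
  \qquad\text{(the Ricci tensor vanishes on vectors orthogonal to } K\text{).}
\]
```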

  16. Structured Parallel Programming Patterns for Efficient Computation

    CERN Document Server

    McCool, Michael; Robison, Arch

    2012-01-01

    Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are primarily given using two of th

  17. Parallel QR Decomposition for Electromagnetic Scattering Problems

    National Research Council Canada - National Science Library

    Boleng, Jeff

    1997-01-01

    This report introduces a new parallel QR decomposition algorithm. Test results are presented for several problem sizes, numbers of processors, and data from the electromagnetic scattering problem domain...

  18. Parallel Simulation of Chip-Multiprocessor Architectures

    National Research Council Canada - National Science Library

    Chidester, Matthew C; George, Alan D

    2002-01-01

    Chip-multiprocessor (CMP) architectures present a challenge for efficient simulation, combining the requirements of a detailed microprocessor simulator with that of a tightly-coupled parallel system...

  19. Parallel Prediction of Stock Volatility

    Directory of Open Access Journals (Sweden)

    Priscilla Jenq

    2017-10-01

    Full Text Available Volatility is a measurement of the risk of financial products. A stock will hit new highs and lows over time, and if these highs and lows fluctuate wildly, it is considered a highly volatile stock. Such a stock is considered riskier than a stock whose volatility is low. Although highly volatile stocks are riskier, the returns that they generate for investors can be quite high. Of course, with a riskier stock also comes the chance of losing money and yielding negative returns. In this project, we use historic stock data to help us forecast volatility. Since the financial industry usually uses the S&P 500 as the indicator of the market, we use the S&P 500 as a benchmark to compute the risk. We also use artificial neural networks as a tool to predict volatilities for a specific time frame that is set when we configure the neural network. There have been reports that neural networks with different numbers of layers and different numbers of hidden nodes may generate varying results; in fact, we may be able to find the best configuration of a neural network to compute volatilities. We implement this system using a parallel approach. The system can be used as a tool for investors for allocating and hedging assets.
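
    The workflow sketched below follows the description above: compute an annualized rolling volatility series from daily log returns (a synthetic random walk stands in for S&P 500 history), then evaluate several neural-network configurations, differing in layer and hidden-node counts, in parallel and keep the best performer. The window length, lag count, candidate configurations, and use of scikit-learn's MLPRegressor are all assumptions for illustration.

```python
# Rolling volatility from daily returns, plus a parallel search over network configurations.
import numpy as np
from multiprocessing import Pool
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2500)))   # stand-in for S&P 500 history
returns = np.diff(np.log(prices))

WINDOW, LAGS = 21, 10
vol = np.array([returns[i - WINDOW:i].std() * np.sqrt(252)            # annualized rolling volatility
                for i in range(WINDOW, returns.size)])

# Features: the last LAGS volatility values; target: the next value.
X = np.array([vol[i - LAGS:i] for i in range(LAGS, vol.size)])
y = vol[LAGS:]
split = int(0.8 * X.shape[0])
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

CONFIGS = [(8,), (16,), (32,), (16, 8), (32, 16), (32, 16, 8)]         # layer / hidden-node counts

def evaluate(hidden_layers):
    net = MLPRegressor(hidden_layer_sizes=hidden_layers, max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    return hidden_layers, net.score(X_test, y_test)                    # R^2 on held-out data

if __name__ == "__main__":
    with Pool(len(CONFIGS)) as pool:                                    # parallel configuration search
        results = pool.map(evaluate, CONFIGS)
    for layers, score in sorted(results, key=lambda r: -r[1]):
        print(f"hidden layers {layers}: R^2 = {score:.3f}")
```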

  20. A Soft Parallel Kinematic Mechanism.

    Science.gov (United States)

    White, Edward L; Case, Jennifer C; Kramer-Bottiglio, Rebecca

    2018-02-01

    In this article, we describe a novel holonomic soft robotic structure based on a parallel kinematic mechanism. The design is based on the Stewart platform, which uses six sensors and actuators to achieve full six-degree-of-freedom motion. Our design is much less complex than a traditional platform, since it replaces the 12 spherical and universal joints found in a traditional Stewart platform with a single highly deformable elastomer body and flexible actuators. This reduces the total number of parts in the system and simplifies the assembly process. Actuation is achieved through coiled shape-memory alloy actuators. State observation and feedback are accomplished through the use of capacitive elastomer strain gauges. The main structural element is an elastomer joint that provides antagonistic force. We report the response of the actuators and sensors individually, then report the response of the complete assembly. We show that the completed robotic system is able to achieve full position control, and we discuss the limitations associated with using responsive material actuators. We believe that the control demonstrated on a single body in this work could be extended to chains of such bodies to create complex soft robots.