Sample records for automated parallel cultures

  1. Fully automated parallel oligonucleotide synthesizer

    Lebl, M.; Burger, Ch.; Ellman, B.; Heiner, D.; Ibrahim, G.; Jones, A.; Nibbe, M.; Thompson, J.; Mudra, Petr; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel


    Vol. 66, No. 8 (2001), pp. 1299-1314. ISSN 0010-0765. Institutional research plan: CEZ:AV0Z4055905. Keywords: automated oligonucleotide synthesizer. Subject RIV: CC - Organic Chemistry. Impact factor: 0.778, year: 2001

  2. Parallel symbolic execution for automated real-world software testing

    Bucur, Stefan; Ureche, Vlad; Zamfir, Cristian; Candea, George


    This paper introduces Cloud9, a platform for automated testing of real-world software. Our main contribution is the scalable parallelization of symbolic execution on clusters of commodity hardware, to help cope with path explosion. Cloud9 provides a systematic interface for writing "symbolic tests" that concisely specify entire families of inputs and behaviors to be tested, thus improving testing productivity. Cloud9 can handle not only single-threaded programs but also multi-threaded and dis...

  3. Automated Enhanced Parallelization of Sequential C to Parallel OpenMP

    Dheeraj D., Shruti Ramesh, Nitish B.


    The paper presents work towards the implementation of a technique that enhances parallel execution of auto-generated OpenMP programs by taking the architecture of on-chip cache memory into account, thereby achieving higher performance. It avoids false sharing in 'for' loops by generating OpenMP code that dynamically schedules chunks, placing each core's data one cache line size apart. It has been found that most parallelization tools do not deal with significant multicore issues such as false sharing, which can degrade performance. An open-source parallelization tool called Par4All (Parallel for All), which internally makes use of the PIPS (Parallelization Infrastructure for Parallel Systems) - PoCC (Polyhedral Compiler Collection) integration, has been analyzed and its power unleashed to achieve maximum hardware utilization. The work focuses only on optimizing the parallelization of for-loops, since loops are the most time-consuming parts of code. The performance of the generated OpenMP programs has been analyzed on different architectures using the Intel® VTune™ Performance Analyzer. Some of the computationally intensive programs from PolyBench have been tested with different data sets, and the results reveal that the OpenMP codes generated by the enhanced technique achieve considerable speedup. The deliverables include the automation tool, test cases, corresponding OpenMP programs, and performance analysis reports.
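
    As a rough illustration of the cache-line-aware chunk scheduling this abstract describes, the C/OpenMP sketch below sizes each dynamically scheduled chunk to one cache line of doubles, so threads working on adjacent chunks write to disjoint lines. The 64-byte line size, the array, and the loop body are assumptions for illustration, not details taken from the paper.

      /* Sketch of the false-sharing mitigation described above: dynamic
       * chunks sized to one cache line so that threads working on
       * adjacent chunks never write into the same line. The 64-byte
       * line size is an assumption, not a value from the paper. */
      #include <omp.h>
      #include <stdio.h>

      #define N 1000000
      #define CACHE_LINE_BYTES 64
      #define CHUNK (CACHE_LINE_BYTES / sizeof(double))  /* 8 doubles */

      int main(void) {
          static double a[N], b[N];
          for (int i = 0; i < N; i++) b[i] = i;

          /* each dynamically assigned chunk covers a full cache line */
          #pragma omp parallel for schedule(dynamic, CHUNK)
          for (int i = 0; i < N; i++)
              a[i] = 2.0 * b[i];

          printf("a[N-1] = %f\n", a[N - 1]);
          return 0;
      }

    With single-element chunks, iterations assigned to different threads can fall within one cache line and repeatedly invalidate each other's copies; line-sized chunks remove that interleaving (up to the alignment of the array base).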

  4. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris


    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office Ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a type of trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high-quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor-farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time-consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency ...
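
    The master-slave scheme this abstract describes can be illustrated in shared memory. The C/OpenMP sketch below is a toy analogue rather than the authors' cluster implementation: fitness evaluations (the slaves' job) run in parallel, while selection, crossover, and mutation stay serial (the master's job). The bit-counting fitness function and all parameters are invented for illustration.

      /* Toy master-slave GA: parallel fitness evaluation, serial
       * genetic operators. Problem and parameters are illustrative. */
      #include <omp.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define POP 64
      #define GENES 32
      #define GENERATIONS 100

      static double fitness(const int *g) {      /* toy: count of 1-bits */
          double s = 0;
          for (int i = 0; i < GENES; i++) s += g[i];
          return s;
      }

      int main(void) {
          int pop[POP][GENES], child[POP][GENES];
          double fit[POP];
          for (int i = 0; i < POP; i++)
              for (int j = 0; j < GENES; j++) pop[i][j] = rand() % 2;

          for (int gen = 0; gen < GENERATIONS; gen++) {
              /* "slave" work: score every candidate in parallel */
              #pragma omp parallel for
              for (int i = 0; i < POP; i++) fit[i] = fitness(pop[i]);

              /* "master" work: tournament selection, crossover, mutation */
              for (int i = 0; i < POP; i++) {
                  int a = rand() % POP, b = rand() % POP;
                  int p1 = fit[a] > fit[b] ? a : b;
                  a = rand() % POP; b = rand() % POP;
                  int p2 = fit[a] > fit[b] ? a : b;
                  int cut = rand() % GENES;
                  for (int j = 0; j < GENES; j++)
                      child[i][j] = (j < cut ? pop[p1][j] : pop[p2][j]);
                  if (rand() % 100 < 2)            /* 2% mutation rate */
                      child[i][rand() % GENES] ^= 1;
              }
              for (int i = 0; i < POP; i++)
                  for (int j = 0; j < GENES; j++) pop[i][j] = child[i][j];
          }
          double best = 0;
          for (int i = 0; i < POP; i++)
              if (fitness(pop[i]) > best) best = fitness(pop[i]);
          printf("best fitness: %g\n", best);
          return 0;
      }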

  5. Automating the selection of standard parallels for conic map projections

    Šavrič, Bojan; Jenny, Bernhard


    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
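
    For contrast with the article's fitted polynomial model (whose coefficients are not reproduced here), the C sketch below implements a widely cited rule of thumb of the kind the abstract mentions: placing the standard parallels one-sixth of the latitude range in from the southern and northern map edges.

      /* One-sixth rule of thumb for conic standard parallels: place them
       * 1/6 of the latitude range in from the south and north map edges.
       * This is the simple heuristic the article improves on; the paper's
       * polynomial coefficients are not reproduced here. */
      #include <stdio.h>

      static void standard_parallels(double lat_south, double lat_north,
                                     double *phi1, double *phi2) {
          double range = lat_north - lat_south;
          *phi1 = lat_south + range / 6.0;
          *phi2 = lat_north - range / 6.0;
      }

      int main(void) {
          double phi1, phi2;
          standard_parallels(24.0, 50.0, &phi1, &phi2); /* conterminous US */
          printf("standard parallels: %.2f N, %.2f N\n", phi1, phi2);
          return 0;
      }

    For the conterminous United States (roughly 24° N to 50° N) this gives about 28.3° N and 45.7° N, close to the 29.5° N / 45.5° N pair conventionally used with the Albers projection.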

  6. Automating the parallel processing of fluid and structural dynamics calculations

    Arpasi, Dale J.; Cole, Gary L.


    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular, the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  7. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)


    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  8. Automation in laser scanning for cultural heritage applications

    Böhm, Jan; Haala, Norbert; Alshawabkeh, Yahya


    Within the paper we present the current activities of the Institute for Photogrammetry in cultural heritage documentation in Jordan. In particular two sites, Petra and Jerash, were recorded using terrestrial laser scanning (TLS). We present the results and the current status of the recording. Experiences drawn from these projects have led us to investigate more automated approaches to TLS data processing. We detail two approaches within this work. The automation of georeferencing for TLS data...

  9. Saturated Feedback Control for an Automated Parallel Parking Assist System

    Petrov, Plamen; Nashashibi, Fawzi


    This paper considers the parallel parking problem of automatic front-wheel steering vehicles. The problem of stabilizing the vehicle at desired position and orientation is seen as an extension of the tracking problem. A saturated control is proposed which achieves quick steering of the system near the desired position of the parking spot with desired orientation and can be successfully used in solving parking problems. In addition, in order to obtain larger area of the starting positions of t...

  10. Automated harvesting and 2-step purification of unclarified mammalian cell-culture broths containing antibodies.

    Holenstein, Fabian; Eriksson, Christer; Erlandsson, Ioana; Norrman, Nils; Simon, Jill; Danielsson, Åke; Milicov, Adriana; Schindler, Patrick; Schlaeppi, Jean-Marc


    Therapeutic monoclonal antibodies represent one of the fastest-growing segments in the pharmaceutical market. The growth of the segment has necessitated development of new, efficient, and cost-saving platforms for the preparation and analysis of early candidates for faster and better antibody selection and characterization. We report on a new integrated platform for automated harvesting of whole unclarified cell-culture broths, followed by in-line tandem affinity capture, pH neutralization, and size-exclusion chromatography of recombinant antibodies expressed transiently in mammalian human embryonic kidney 293T cells at the 1-L scale. The system consists of two bench-top chromatography instruments connected to a central unit with eight disposable filtration devices used for loading and filtering the cell cultures. The staggered parallel multi-step configuration of the system allows unattended processing of eight samples in less than 24 h. The system was validated with a random panel of 45 whole-cell culture broths containing recombinant antibodies in the early profiling phase. The results showed that the overall performance of the automated preparative system was higher compared to the conventional downstream process involving manual harvesting and purification. The mean recovery of purified material from the culture broth was 66.7%, representing a 20% increase compared to that of the manual process. Moreover, the automated process reduced the amount of residual aggregates in the purified antibody fractions 3-fold, indicating that the automated system allows the cost-efficient and timely preparation of antibodies in the 20-200 mg range, and covers the requirements for early in vitro and in vivo profiling and formulation of these drug candidates. PMID:26431859

  11. The Protein Maker: an automated system for high-throughput parallel purification

    The Protein Maker instrument addresses a critical bottleneck in structural genomics by allowing automated purification and buffer testing of multiple protein targets in parallel with a single instrument. Here, the use of this instrument to (i) purify multiple influenza-virus proteins in parallel for crystallization trials and (ii) identify optimal lysis-buffer conditions prior to large-scale protein purification is described. The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications

  12. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    Hon Ming Yip; John C. S. Li; Kai Xie; Xin Cui; Agrim Prasad; Qiannan Gao; Chi Chiu Leung; Lam, Raymond H. W.


    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet...

  13. Anthropology and cultural neuroscience: creating productive intersections in parallel fields.

    Brown, R A; Seligman, R


    Partly due to the failure of anthropology to productively engage the fields of psychology and neuroscience, investigations in cultural neuroscience have occurred largely without the active involvement of anthropologists or anthropological theory. Dramatic advances in the tools and findings of social neuroscience have emerged in parallel with significant advances in anthropology that connect social and political-economic processes with fine-grained descriptions of individual experience and behavior. We describe four domains of inquiry that follow from these recent developments, and provide suggestions for intersections between anthropological tools - such as social theory, ethnography, and quantitative modeling of cultural models - and cultural neuroscience. These domains are: the sociocultural construction of emotion, status and dominance, the embodiment of social information, and the dual social and biological nature of ritual. Anthropology can help locate unique or interesting populations and phenomena for cultural neuroscience research. Anthropological tools can also help "drill down" to investigate key socialization processes accountable for cross-group differences. Furthermore, anthropological research points at meaningful underlying complexity in assumed relationships between social forces and biological outcomes. Finally, ethnographic knowledge of cultural content can aid with the development of ecologically relevant stimuli for use in experimental protocols. PMID:19874960

  14. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Cooke, Daniel; Rushton, Nelson


    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less ...
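
    A loose analogue of what the Normalize-Transpose law automates can be written by hand: a function defined on scalars is lifted elementwise over a sequence, and the loop is parallelized mechanically rather than by the programmer. The C/OpenMP sketch below is only an illustration of the idea, not output of the SequenceL-to-C++ translator.

      /* Hand-written analogue of NT: lift a scalar operation over a
       * sequence with a mechanically generated parallel loop. Function
       * and data are illustrative, not from SequenceL. */
      #include <omp.h>
      #include <math.h>
      #include <stdio.h>

      static double scalar_op(double x) { return sqrt(x) + 1.0; }

      /* what the NT law automates: applying scalar_op elementwise */
      static void nt_map(double (*f)(double), const double *in,
                         double *out, int n) {
          #pragma omp parallel for
          for (int i = 0; i < n; i++) out[i] = f(in[i]);
      }

      int main(void) {
          double in[8] = {0, 1, 4, 9, 16, 25, 36, 49}, out[8];
          nt_map(scalar_op, in, out, 8);
          for (int i = 0; i < 8; i++) printf("%g ", out[i]);
          printf("\n");
          return 0;
      }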

  15. Digital microfluidics for automated hanging drop cell spheroid culture.

    Aijian, Andrew P; Garrell, Robin L


    Cell spheroids are multicellular aggregates, grown in vitro, that mimic the three-dimensional morphology of physiological tissues. Although there are numerous benefits to using spheroids in cell-based assays, the adoption of spheroids in routine biomedical research has been limited, in part, by the tedious workflow associated with spheroid formation and analysis. Here we describe a digital microfluidic platform that has been developed to automate liquid-handling protocols for the formation, maintenance, and analysis of multicellular spheroids in hanging drop culture. We show that droplets of liquid can be added to and extracted from through-holes, or "wells," fabricated in the bottom plate of a digital microfluidic device, enabling the formation and assaying of hanging drops. Using this digital microfluidic platform, spheroids of mouse mesenchymal stem cells were formed and maintained in situ for 72 h, exhibiting good viability (>90%) and size uniformity (low % coefficient of variation). The digital microfluidic platform provides a viable tool for automating cell spheroid culture and analysis. PMID:25510471

  16. Data transfer processes in the automated system of parallel design and construction

    Volkov Andrey Anatol'evich


    This article covers data transfer processes in the automated system of parallel design and construction. The authors consider the structure of reports used by contractors and clients when large-scale projects are implemented. All necessary items of information are grouped into three levels, and each level is described by certain attributes. The authors draw particular attention to the integrated operational schedule, as it is the main tool of project management. Some recommendations concerning the form and content of reports are presented. Integrated automation of all operations is a necessary condition for the successful implementation of the new concept. The technical aspect of the notion of parallel design and construction also includes the client-to-server infrastructure that brings together all processes implemented by the parties involved in projects. This approach should be taken into consideration in the course of reviewing existing codes and standards to eliminate any inconsistency between construction legislation and the practical experience of the engineers involved in the process.

  17. Fully Automated Design of Super-High-Rise Building Structures by a Hybrid AI Model on a Massively Parallel Machine

    Adeli, Hojjat; Park, H. S.


    This article presents an innovative research project (sponsored by the National Science Foundation, the American Iron and Steel Institute, and the American Institute of Steel Construction) where computationally elegant algorithms based on the integration of a novel connectionist computing model, mathematical optimization, and a massively parallel computer architecture are used to automate the complex process of engineering design.

  18. Establishment of automated culture system for murine induced pluripotent stem cells

    Koike Hiroyuki


    Background: Induced pluripotent stem (iPS) cells can differentiate into any cell type, which makes them an attractive resource in fields such as regenerative medicine, drug screening, or in vitro toxicology. The most important prerequisite for these industrial applications is a stable supply and uniform quality of iPS cells. Variation in quality largely results from differences in handling skills between operators in laboratories. To minimize these differences, establishment of an automated iPS cell culture system is necessary. Results: We developed a standardized mouse iPS cell maintenance culture, using an automated cell culture system housed in a CO2 incubator commonly used in many laboratories. The iPS cells propagated in a chamber uniquely designed for automated culture and showed specific colony morphology, as for manual culture. A cell detachment device in the system passaged iPS cells automatically by dispersing colonies to single cells. In addition, iPS cells were passaged without any change in colony morphology or expression of undifferentiated stem cell markers during 4 weeks of automated culture. Conclusions: Our results show that use of this compact, automated cell culture system facilitates stable iPS cell culture without obvious effects on iPS cell pluripotency or colony-forming ability. The feasibility of iPS cell culture automation may greatly facilitate the use of this versatile cell source for a variety of biomedical applications.

  19. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio


    This paper discusses a case study that examined the influence of cultural, organizational, and automation capability factors upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  20. Automated integration of genomic physical mapping data via parallel simulated annealing

    Slezak, T.


    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques, including fluorescence in situ hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data, since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps in the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
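
    The optimization this abstract describes can be miniaturized. The C sketch below reorders the clone "columns" of a tiny binary membership matrix by simulated annealing, scoring an ordering by the number of gaps in the object "rows". The FISH ordering constraints and the 40-machine server/client parallelism are omitted, and the matrix, cooling schedule, and sizes are invented for illustration.

      /* Toy version of the annealing formulation above: swap clone
       * columns to make the 1s in each object row contiguous. */
      #include <math.h>
      #include <stdio.h>
      #include <stdlib.h>

      #define M 4    /* objects (contigs/probes), illustrative */
      #define N 8    /* clones, illustrative */

      static int member[M][N] = {   /* member[r][c] = 1: clone c hits r */
          {1,0,1,0,1,0,0,0}, {0,1,0,1,0,1,0,0},
          {0,0,0,0,1,0,1,1}, {1,1,0,0,0,0,0,1}
      };

      /* count gaps: runs of absent clones between present ones in a row */
      static int cost(const int *order) {
          int gaps = 0;
          for (int r = 0; r < M; r++) {
              int in_run = 0, pending_gap = 0;
              for (int c = 0; c < N; c++) {
                  if (member[r][order[c]]) {
                      if (in_run && pending_gap) gaps++;
                      in_run = 1; pending_gap = 0;
                  } else if (in_run) pending_gap = 1;
              }
          }
          return gaps;
      }

      int main(void) {
          int order[N];
          for (int i = 0; i < N; i++) order[i] = i;
          double T = 2.0;
          int cur = cost(order);
          for (int step = 0; step < 20000; step++, T *= 0.9995) {
              int i = rand() % N, j = rand() % N;
              int t = order[i]; order[i] = order[j]; order[j] = t;
              int nc = cost(order);
              if (nc <= cur || exp((cur - nc) / T) > (double)rand() / RAND_MAX)
                  cur = nc;                                    /* accept */
              else { t = order[i]; order[i] = order[j]; order[j] = t; }
          }
          printf("final gap count: %d, order:", cur);
          for (int i = 0; i < N; i++) printf(" %d", order[i]);
          printf("\n");
          return 0;
      }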

  1. Automated regenerable microarray-based immunoassay for rapid parallel quantification of mycotoxins in cereals.

    Oswald, S; Karsunke, X Y Z; Dietrich, R; Märtlbauer, E; Niessner, R; Knopp, D


    An automated flow-through multi-mycotoxin immunoassay using the stand-alone Munich Chip Reader 3 platform and reusable biochips was developed and evaluated. This technology combines a unique microarray, prepared by covalent immobilization of target analytes or derivatives on diamino-poly(ethylene glycol)-functionalized glass slides, with dedicated chemiluminescence readout by a CCD camera. In a first stage, we aimed for the parallel detection of aflatoxins, ochratoxin A, deoxynivalenol, and fumonisins in cereal samples in a competitive indirect immunoassay format. The method combines sample extraction with methanol/water (80:20, v/v), extract filtration and dilution, and immunodetection using horseradish peroxidase-labeled anti-mouse IgG antibodies. The total analysis time, including extraction, extract dilution, measurement, and surface regeneration, was 19 min. The prepared microarray chip was reusable at least 50 times. Oat extract proved to be a representative sample matrix for the preparation of mycotoxin standards and the determination of different types of cereals such as oat, wheat, rye, and maize polenta at relevant concentrations according to European Commission regulation. The recovery rates of fortified samples in different matrices were lower for the more water-soluble fumonisin B1 and deoxynivalenol (55-80% and 58-79%) and higher for the less polar aflatoxins and ochratoxin A (127-132% and 82-120%), respectively. Finally, the results for wheat samples naturally contaminated with deoxynivalenol were critically compared in an interlaboratory comparison with data obtained from microtiter plate ELISA, the aokinmycontrol® method, and liquid chromatography-mass spectrometry, and were found to be in good agreement. PMID:23620369

  2. Parallel worlds: art and sport in contemporary culture

    Tainio, Matti


    This research maps the relationships between art and sport through various perspectives using a multidisciplinary approach. In addition, three artistic projects have been included in the research. The research produces a reasoned proposition as to why art and sport should be seen as similar practices in contemporary culture and why this perspective is beneficial. In the everyday view, art and sport seem to be opposite cultural practices, but by adopting an appropriate view, similarities can be detected. In ord...

  3. Miniaturized Mass-Spectrometry-Based Analysis System for Fully Automated Examination of Conditioned Cell Culture Media

    Weber, E.; Pinkse, M.W.H.; Bener-Aksam, E.; Vellekoop, M.J.; Verhaert, P.D.E.M.


    We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with M

  4. Impact of Implementation of an Automated Liquid Culture System on Diagnosis of Tuberculous Pleurisy

    Lee, Byung Hee; Yoon, Seong Hoon; Yeo, Hye Ju; Kim, Dong Wan; Lee, Seung Eun; Cho, Woo Hyun; Lee, Su Jin; Kim, Yun Seong; Jeon, Doosoo


    This study was conducted to evaluate the impact of implementation of an automated liquid culture system on the diagnosis of tuberculous pleurisy in an HIV-uninfected patient population. We retrospectively compared the culture yield, time to positivity, and contamination rate of pleural effusion samples in the BACTEC Mycobacteria Growth Indicator Tube 960 (MGIT) and Ogawa media among patients with tuberculous pleurisy. Out of 104 effusion samples, 43 (41.3%) were culture positive on either the...

  5. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    Burcin Ozcan; Pooran Negi; Fernanda Laezza; Manos Papadakis; Demetrio Labate


    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons ...

  6. "Parallel Leadership in an "Unparallel" World"--Cultural Constraints on the Transferability of Western Educational Leadership Theories across Cultures

    Goh, Jonathan Wee Pin


    With the global economy becoming more integrated, the issues of cross-cultural relevance and transferability of leadership theories and practices have become increasingly urgent. Drawing upon the concept of parallel leadership in schools proposed by Crowther, Kaagan, Ferguson, and Hann as an example, the purpose of this paper is to examine the…

  7. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.


    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based ...

  8. Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram

    Zang, Pengxiao; Liu, Gangjun; Zhang, Miao; Dongye, Changlei; Wang, Jie; Pechauer, Alex D.; Hwang, Thomas S.; Wilson, David J.; Huang, David; Li, Dengwang


    We propose an innovative registration method to correct motion artifacts in wide-field optical coherence tomography angiography (OCTA) acquired by ultrahigh-speed swept-source OCT (>200 kHz A-scan rate). Considering that the number of A-scans along the fast axis is much higher than the number of positions along the slow axis in the wide-field OCTA scan, a non-orthogonal scheme is introduced. Two en face angiograms in the vertical priority (2 y-fast) are divided into microsaccade-free parallel strips. A gross registration based on large vessels and a fine registration based on small vessels are sequentially applied to register the parallel strips into a composite image. This technique is extended to automatically montage individual registered, motion-free angiograms into an ultrawide-field view. PMID:27446709
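
    The strip-wise idea can be reduced to a toy: split the moving image into horizontal strips and, for each strip, search for the x-offset that maximizes correlation with the reference. The C sketch below does only this one-dimensional, single-stage search; the vessel-based gross and fine stages of the actual method are omitted, and the synthetic images are invented.

      /* Toy strip-based registration: per-strip integer x-shift search
       * by maximizing correlation with a reference image. */
      #include <stdio.h>

      #define H 16
      #define W 32
      #define STRIP_H 4
      #define MAX_SHIFT 5

      static double corr_at(const double ref[H][W], const double mov[H][W],
                            int y0, int dx) {
          double s = 0.0;
          for (int y = y0; y < y0 + STRIP_H; y++)
              for (int x = 0; x < W; x++) {
                  int xr = x + dx;
                  if (xr >= 0 && xr < W) s += ref[y][xr] * mov[y][x];
              }
          return s;
      }

      int main(void) {
          static double ref[H][W], mov[H][W];
          /* synthetic data: a diagonal "vessel" shifted by 2 px in mov */
          for (int y = 0; y < H; y++)
              for (int x = 0; x < W; x++) {
                  ref[y][x] = (x == y + 8) ? 1.0 : 0.0;
                  mov[y][x] = (x == y + 6) ? 1.0 : 0.0;
              }
          for (int y0 = 0; y0 + STRIP_H <= H; y0 += STRIP_H) {
              int best_dx = 0; double best = -1.0;
              for (int dx = -MAX_SHIFT; dx <= MAX_SHIFT; dx++) {
                  double c = corr_at(ref, mov, y0, dx);
                  if (c > best) { best = c; best_dx = dx; }
              }
              printf("strip at y=%2d: estimated x-shift %d\n", y0, best_dx);
          }
          return 0;
      }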

  9. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin


    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the parallel programming and computing platform Nvidia CUDA, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect the anesthetic depth level on a related electroencephalogram (EEG) data set. This dataset is rather complex and large. Moreover, achieving more anesthetic levels with rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in a faster time. PMID:25650073

  10. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze


    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection-molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a large number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent are stored in a separate reagent table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high-throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure, and were utilized directly for high-throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. Copyright 1998 John Wiley & Sons, Inc. PMID:10099494

  11. Slavic and Kazakh Folklore Calendar: Typological and Ethno-Cultural Parallels

    Galina Vlasova


    The study of multi-ethnic folk typology in the ethno-cultural region of Kazakhstan is of fundamental importance in the context of identifying ethno-cultural typological parallels in the holiday calendar and rituals. The mechanism of folk typology is observed in ritual structures when the Slavic and Kazakh folklore calendars are compared. There are typological parallels between all components of the celebrations of Kazakhstan's different ethnic groups: texts, rites, rituals, and cults. The Kazakh and Slavic calendar systems have a collective, functional character and are passed down from generation to generation. The entire annual cycle of Eurasian festivals is based on the principle of collective existence. The Slavic holiday calendar represents a dual-faith synthesis of pagan and Christian elements, while the Kazakh holiday calendar centers on the connection of pagan and Muslim principles. Typologically similar elements of Slavic and Kazakh holidays include structural relatedness, calendar confinement, similar archetypal rituals, and ceremonial models. Slavic and Kazakh ethnic and cultural contacts are reflected in joint celebrations, in interethnic borrowing of practices and rituals, in games, and in Russian and Kazakh song performances by representatives of different ethnic groups. Field observations by Kazakh folklorists suggest the continuing existence of joint Nauryz and Shrovetide celebration traditions. The folklore situation in Kazakhstan demonstrates both the different stages of innovation in the closely related cultures of the Eastern Slavs and the typological relationship and bilateral borrowing through contact with unrelated Turkic ethnic groups. The typological and ethno-cultural parallels, as well as the positive features of these holidays, make them a universal phenomenon important for all members of a social or ethnic group.

  12. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph


    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  13. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.


    This report documents a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high-oxygen-content medium into the surrounding medium, which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  14. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge


    Primary cultures of GABAergic cerebral cortex neurons and glutamatergic cerebellar granule cells were used to study the expression of synaptophysin, a synaptic vesicle marker protein, along with the ability of each cell type to release neurotransmitter upon stimulation. The synaptophysin expression ... and neurotransmitter release were measured in each of the culture types as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and the distribution of synaptophysin in the developing cells were ... assessed by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of ...

  15. A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation

    Kıvılcım, C. Ö.; Duran, Z.


    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. Conventional measurement techniques, however, require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation has become an active area of research; however, fully automated systems for cultural heritage documentation remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open-source software environment, using the example project of a 16th-century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  16. Automated sample preparation in a microfluidic culture device for cellular metabolomics.

    Filla, Laura A; Sanders, Katherine L; Filla, Robert T; Edwards, James L


    Sample pretreatment in conventional cellular metabolomics entails rigorous lysis and extraction steps which increase the duration as well as limit the consistency of these experiments. We report a biomimetic cell culture microfluidic device (MFD) which is coupled with an automated system for rapid, reproducible cell lysis using a combination of electrical and chemical mechanisms. In-channel microelectrodes were created using facile fabrication methods, enabling the application of electric fields up to 1000 V cm^-1. Using this platform, average lysing times were 7.12 s and 3.03 s for chips with no electric fields and electric fields above 200 V cm^-1, respectively. Overall, the electroporation MFDs yielded a ∼10-fold improvement in lysing time over standard chemical approaches. Detection of multiple intracellular nucleotides and energy metabolites in MFD lysates was demonstrated using two different MS platforms. This work will allow for the integrated culture, automated lysis, and metabolic analysis of cells in an MFD which doubles as a biomimetic model of the vasculature. PMID:27118418

  17. Evaluation of a Multi-Parameter Sensor for Automated, Continuous Cell Culture Monitoring in Bioreactors

    Pappas, D.; Jeevarajan, A.; Anderson, M. M.


    Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments in microgravity. Measurement of cell culture medium allows for the optimization of culture conditions on orbit to maximize cell growth and minimize unnecessary exchange of medium. While several discrete sensors exist to measure culture health, a multi-parameter sensor would simplify the experimental apparatus. One such sensor, the Paratrend 7, consists of three optical fibers for measuring pH, dissolved oxygen (pO2), and dissolved carbon dioxide (pCO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-arterial placement in clinical patients, and potentially can be used in NASA's Space Shuttle and International Space Station biotechnology program bioreactors. Methods: A Paratrend 7 sensor was placed at the outlet of a rotating-wall perfused vessel bioreactor system inoculated with BHK-21 (baby hamster kidney) cells. Cell culture medium (GTSF-2, composed of 40% minimum essential medium and 60% L-15 Leibovitz medium) was manually measured using a bench-top blood gas analyzer (BGA, Ciba-Corning). Results: A Paratrend 7 sensor was used over a long-term (>120 day) cell culture experiment. The sensor was able to track changes in cell medium pH, pO2, and pCO2 due to the consumption of nutrients by the BHK-21 cells. When compared to manually obtained BGA measurements, the sensor had good agreement for pH, pO2, and pCO2, with bias [and precision] of 0.02 [0.15], 1 mm Hg [18 mm Hg], and -4.0 mm Hg [8.0 mm Hg], respectively. The Paratrend oxygen sensor was recalibrated (offset) periodically due to drift. The bias for the raw (no offset or recalibration) oxygen measurements was 42 mm Hg [38 mm Hg]. The measured response (rise) time of the sensor was 20 +/- 4 s for pH, 81 +/- 53 s for pCO2, and 51 +/- 20 s for pO2. For long-term cell culture measurements, these response times are more than adequate. Based on these findings, the Paratrend sensor could ...

  18. FY1995 study of low power LSI design automation software with parallel processing; 1995 nendo heiretsu shori wo katsuyoshita shodenryoku LSI muke sekkei jidoka software no kenkyu kaihatsu



    The need for low-power LSIs has rapidly increased recently. For low-power LSI development, not only new circuit technologies but also new design automation tools supporting those technologies are indispensable. The purpose of this project is to develop new design automation software able to design digital LSIs with much lower power than conventional CMOS LSIs. A new design automation software package for very-low-power LSIs has been developed, targeting the pass-transistor logic SPL, a dedicated low-power circuit technology. The software includes a logic synthesis function for pass-transistor-based macrocells and a macrocell placement function. Several new algorithms have been developed for the software, e.g., BDD construction. Some of them are designed and implemented for parallel processing in order to reduce processing time. The logic synthesis function was tested on a set of benchmarks and finally applied to a low-power CPU design. The designed 8-bit CPU was fully compatible with the Zilog Z-80. The power dissipation of the CPU was compared with that of a commercial CMOS Z-80; the new CPU reduced power by up to 82% relative to the CMOS version. In addition, the parallel processing speedup was measured on the macrocell placement function, where a 34-fold speedup was realized. (NEDO)

  19. Automated detection of soma location and morphology in neuronal network cultures.

    Burcin Ozcan

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images, so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of the level set method and directional multiscale filters. We also present an approach to extract the soma's surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications.

  20. An Engineered Approach to Stem Cell Culture: Automating the Decision Process for Real-Time Adaptive Subculture of Stem Cells

    Ker, Dai Fei Elmer; Weiss, Lee E.; Junkers, Silvina N.; Chen, Mei; Yin, Zhaozheng; Sandbothe, Michael F.; Huh, Seung-il; Eom, Sungeun; Bise, Ryoma; Osuna-Highley, Elvira; Kanade, Takeo; Campbell, Phil G.


    Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and obj...

  1. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.


    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152

  2. Identifying and Quantifying Cultural Factors That Matter to the IT Workforce: An Approach Based on Automated Content Analysis

    Schmiedel, Theresa; Müller, Oliver; Debortoli, Stefan


    Organizational culture represents a key success factor in highly competitive environments, such as the IT sector. Thus, IT companies need to understand what makes up a culture that fosters employee performance. While existing research typically uses self-report questionnaires to study the relation ... this study builds on 112,610 online reviews of Fortune 500 IT companies collected from Glassdoor, an online platform on which current and former employees can anonymously review companies and their management. We perform an automated content analysis to identify cultural factors that employees emphasize in their reviews. Through a regression analysis on numerical employee satisfaction ratings, we find that a culture of learning and performance orientation contributes to employee motivation, while a culture of assertiveness and gender inegalitarianism has a strong negative influence on employees ...

  3. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    Giuliano, M. G.


    In the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image-Based Modeling) and a classical survey with a total station, the Nikon Nivo C. Data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out with the Agisoft PhotoScan software, and the final result was a scaled 3D model of the monument, imported into the MeshLab software for different views. Three orthophotos in JPG format were extracted from the model and then imported into AutoCAD to obtain façade surveys.

  4. PetriJet Platform Technology: An Automated Platform for Culture Dish Handling and Monitoring of the Contents.

    Vogel, Mathias; Boschke, Elke; Bley, Thomas; Lenk, Felix


    Due to the size of the required equipment, automated laboratory systems are often unavailable or impractical for use in small- and mid-sized laboratories. However, recent developments in automation engineering provide endless possibilities for incorporating benchtop devices. Here, the authors describe the development of a platform technology to handle sealed culture dishes. The programming is based on the Petri net method and implemented via Codesys V3.5 pbF. The authors developed a system of three independent electrically driven axes capable of handling sealed culture dishes. The device performs two different processes. First, it automatically obtains an image of every processed culture dish. Second, a server-based image analysis algorithm provides the user with several parameters of the cultivated sample on the culture dish. For demonstration purposes, the authors developed a continuous, systematic, nondestructive, and quantitative method for monitoring the growth of a hairy root culture. New results can be displayed with respect to the previous images. This system is highly accurate, and the results can be used to simulate the growth of biological cultures. The authors believe that the innovative features of this platform can be implemented, for example, in the food industry, clinical environments, and research laboratories. PMID:25787804

  5. The Effect of Culture on the Sales Process Within a Global Company. Case Company ABB Oy Distribution Automation Sales Unit.

    Kruger, Frantz


    My aim in this study was to investigate possible differences between cultures when looking at them in the context of the sales process within a global company. If these differences did exist, I would further attempt to prove that, through careful analysis of the sales process and the elements within it, the associated activity could be predicted or anticipated. I compared the activity of the ABB Distribution Automation Sales Unit (Vaasa, Finland) tow...

  6. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    Dai Fei Elmer Ker

    Full Text Available Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and
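
    The notification logic can be illustrated with a minimal Python sketch that extrapolates measured confluency linearly in time, a simplification of the paper's vision-based predictor (all values invented):

        import numpy as np

        def hours_until_threshold(t_hours, confluency, threshold=0.8):
            """Fit confluency(t) = a*t + b and return hours until the threshold."""
            a, b = np.polyfit(t_hours, confluency, 1)
            if a <= 0:
                return None  # no growth trend detected
            return max(0.0, (threshold - b) / a - t_hours[-1])

        t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])       # imaging times (h)
        c = np.array([0.20, 0.28, 0.37, 0.45, 0.52])  # confluency fraction
        eta = hours_until_threshold(t, c)
        if eta is not None and eta <= 4.0:
            print(f"notify operators: threshold expected in {eta:.1f} h")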

  7. Analysis of the disagreement between automated bioluminescence-based and culture methods for detecting significant bacteriuria, with proposals for standardizing evaluations of bacteriuria detection methods.

    Nichols, W. W.; Curtis, G D; Johnston, H H


    A fully automated method for detecting significant bacteriuria is described which uses firefly luciferin and luciferase to detect bacterial ATP in urine. The automated method was calibrated and evaluated, using 308 urine specimens, against two reference culture methods. We obtained a specificity of 0.79 and sensitivity of 0.75 using a quantitative pour plate reference test and a specificity of 0.79 and a sensitivity of 0.90 using a semiquantitative standard loop reference test. The majority o...

  8. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    Kim, Youn-Kyu; Park, Seul-Hyun; Lee, Joo-Hee; Choi, Gi-Hyuk


    In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on the muscles to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. This reactor design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, so that the bioreactor can be reused after each experiment. The bioreactor control system is designed to circulate culture media to 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of this device.

  9. The performance of fully automated urine analysis results for predicting the need of urine culture test

    Hatice Yüksel


    Full Text Available Objectives: Urinalysis and urine culture are the most common tests for diagnosis of urinary tract infections. The aim of our study is to examine the diagnostic performance of urine analysis and its role in determining the need for urine culture. Methods: Urine culture and urine analysis results of 362 patients were retrospectively analyzed. Culture results were taken as the reference for the chemical and microscopic examination of urine, and the diagnostic accuracy of the test parameters that may be markers for urinary tract infection, as well as the performance of urine analysis in predicting urine culture requirements, were calculated. Results: A total of 362 urine culture results were evaluated, and 67% of them were negative. The results of leukocyte esterase and nitrite in chemical analysis, and of leukocytes and bacteria in microscopic analysis, were normal in 50.4% of culture-negative urines. In diagnostic accuracy calculations, leukocyte esterase (86.1%) and microscopic leukocytes (88.0%) showed high sensitivity, while nitrite (95.4%) and bacteria (86.6%) showed high specificity. The area under the curve was calculated as 0.852 in ROC analysis of the microscopic examination for leukocytes. Conclusion: Fully automatic urine devices can provide sufficient diagnostic accuracy for urine analysis. Effective evaluation of urine analysis results can predict the necessity of urine culture requests and may contribute to a reduction in workload and cost. J Clin Exp Invest 2014; 5(2): 286-289
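
    A minimal Python sketch of the reported accuracy measures (sensitivity, specificity, ROC AUC), assuming binary culture results as the reference and one numeric urinalysis parameter; the cut-off and data are invented:

        import numpy as np
        from sklearn.metrics import confusion_matrix, roc_auc_score

        culture_pos = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0])       # reference
        leukocytes = np.array([2, 5, 50, 80, 8, 120, 3, 60, 10, 4])  # per HPF

        pred = (leukocytes >= 25).astype(int)  # hypothetical cut-off
        tn, fp, fn, tp = confusion_matrix(culture_pos, pred).ravel()
        print("sensitivity:", tp / (tp + fn))
        print("specificity:", tn / (tn + fp))
        print("AUC:", roc_auc_score(culture_pos, leukocytes))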

  10. Characterization and Classification of Adherent Cells in Monolayer Culture using Automated Tracking and Evolutionary Algorithms

    Zhang, Z.; Bedder, M; Smith, S L; Walker, D; Shabir, S.; Southgate, J


    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A system of cell tracking employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24-hour period. Subsequent analysis following feature extraction demonstrated the ability of the technique t...

  11. Attempts to Automate the Process of Generation of Orthoimages of Objects of Cultural Heritage

    Markiewicz, J. S.; Podlasiak, P.; Zawieska, D.


    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. The orthoimage is a cartometric form of photographic presentation of information in a two-dimensional reference system. The paper discusses the automation of orthoimage generation based on TLS data and digital images. At present, attempts are made to apply modern technologies not only for surveys but also during data processing. This paper presents attempts at utilising appropriate algorithms and the authors' application for automatic generation of the projection plane, needed to acquire intensity orthoimages from TLS data. Such planes are defined manually in the majority of popular TLS data processing applications. A separate issue related to RGB image generation is the orientation of digital images in relation to scans; this is important in particular when scans and photographs are not taken simultaneously. The paper presents experiments on the use of the SIFT algorithm for automatic matching of intensity orthoimages and digital (RGB) photographs. Satisfactory results have been obtained, both for the automation of the process and for the quality of the resulting orthoimages.
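
    The matching step might look as follows in a minimal Python/OpenCV sketch (not the authors' application; file names are placeholders and a build of OpenCV with SIFT support is assumed):

        import cv2

        img1 = cv2.imread("intensity_orthoimage.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("rgb_photo.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test on 2-nearest-neighbour matches.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.75 * n.distance]
        print(f"{len(good)} putative correspondences")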

  12. Evaluation of the Paratrend Multi-Analyte Sensor for Potential Utilization in Long-Duration Automated Cell Culture Monitoring

    Hwang, Emma Y.; Pappas, Dimitri; Jeevarajan, Antony S.; Anderson, Melody M.


    BACKGROUND: Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments. While several single-analyte sensors exist to measure culture health, a multi-analyte sensor would simplify the cell culture system. One such multi-analyte sensor, the Paratrend 7 manufactured by Diametrics Medical, consists of three optical fibers for measuring pH, dissolved carbon dioxide (pCO(2)), dissolved oxygen (pO(2)), and a thermocouple to measure temperature. The sensor bundle was designed for intra-vascular measurements in clinical settings, and can be used in bioreactors operated both on the ground and in NASA's Space Shuttle and International Space Station (ISS) experiments. METHODS: A Paratrend 7 sensor was placed at the outlet of a bioreactor inoculated with BHK-21 (baby hamster kidney) cells. The pH, pCO(2), pO(2), and temperature data were transferred continuously to an external computer. Cell culture medium, manually extracted from the bioreactor through a sampling port, was also assayed using a bench-top blood gas analyzer (BGA). RESULTS: Two Paratrend 7 sensors were used over a single cell culture experiment (64 days). When compared to the manually obtained BGA samples, the sensor had good agreement for pH, pCO(2), and pO(2), with bias (and precision) of 0.005 (0.024), 8.0 mmHg (4.4 mmHg), and 11 mmHg (17 mmHg), respectively, for the first two sensors. A third Paratrend sensor (operated for 141 days) had similar agreement (0.02+/-0.15 for pH, -4+/-8 mmHg for pCO(2), and 24+/-18 mmHg for pO(2)). CONCLUSION: The resulting biases and precisions are comparable to Paratrend sensor clinical results. Although the pO(2) differences may be acceptable for clinically relevant measurement ranges, the O(2) sensor in this bundle may not be reliable enough for the ranges of pO(2) in these cell culture studies without periodic calibration.
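
    A minimal Python sketch of the agreement statistics used here, where bias is the mean sensor-minus-BGA difference and precision is its standard deviation (readings invented):

        import numpy as np

        sensor = np.array([7.31, 7.28, 7.35, 7.30])  # illustrative pH readings
        bga = np.array([7.30, 7.26, 7.34, 7.31])

        diff = sensor - bga
        print("bias:", diff.mean())
        print("precision (SD):", diff.std(ddof=1))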

  13. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    Nuez Fernando


    Full Text Available Abstract Background Expressed sequence tag (EST collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http

  14. Automated Voxel Model from Point Clouds for Structural Analysis of Cultural Heritage

    Bitelli, G.; Castellazzi, G.; D'Altri, A. M.; De Miranda, S.; Lambertini, A.; Selvaggi, I.


    In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is a point cloud. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, into a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result, a voxel model with variable resolution is produced. Different parameters are compared and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
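
    The core voxelization step can be sketched in a few lines of Python; a real pipeline would add hole filling and variable resolution to obtain the filled volume model described above (cloud and voxel size invented):

        import numpy as np

        points = np.random.rand(10000, 3) * [10.0, 8.0, 25.0]  # placeholder cloud (m)
        voxel_size = 0.25                                       # m per voxel

        idx = np.floor((points - points.min(axis=0)) / voxel_size).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
        grid[tuple(idx.T)] = True  # mark occupied voxels
        print("occupied voxels:", int(grid.sum()))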

  15. Scalable Transcriptome Preparation for Massive Parallel Sequencing

    Henrik Stranneheim; Beata Werne; Ellen Sherwood; Joakim Lundeberg


    BACKGROUND: The tremendous output of massive parallel sequencing technologies requires automated robust and scalable sample preparation methods to fully exploit the new sequence capacity. METHODOLOGY: In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was co...

  16. Scalable Transcriptome Preparation for Massive Parallel Sequencing

    Stranneheim, Henrik; Werne, Beata; Sherwood, Ellen; Lundeberg, Joakim


    Background The tremendous output of massive parallel sequencing technologies requires automated robust and scalable sample preparation methods to fully exploit the new sequence capacity. Methodology In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was comp...

  17. Accurate, precise modeling of cell proliferation kinetics from time-lapse imaging and automated image analysis of agar yeast culture arrays

    Zhao Lue


    Full Text Available Abstract Background Genome-wide mutant strain collections have increased demand for high throughput cellular phenotyping (HTCP. For example, investigators use HTCP to investigate interactions between gene deletion mutations and additional chemical or genetic perturbations by assessing differences in cell proliferation among the collection of 5000 S. cerevisiae gene deletion strains. Such studies have thus far been predominantly qualitative, using agar cell arrays to subjectively score growth differences. Quantitative systems level analysis of gene interactions would be enabled by more precise HTCP methods, such as kinetic analysis of cell proliferation in liquid culture by optical density. However, requirements for processing liquid cultures make them relatively cumbersome and low throughput compared to agar. To improve HTCP performance and advance capabilities for quantifying interactions, YeastXtract software was developed for automated analysis of cell array images. Results YeastXtract software was developed for kinetic growth curve analysis of spotted agar cultures. The accuracy and precision for image analysis of agar culture arrays was comparable to OD measurements of liquid cultures. Using YeastXtract, image intensity vs. biomass of spot cultures was linearly correlated over two orders of magnitude. Thus cell proliferation could be measured over about seven generations, including four to five generations of relatively constant exponential phase growth. Spot area normalization reduced the variation in measurements of total growth efficiency. A growth model, based on the logistic function, increased precision and accuracy of maximum specific rate measurements, compared to empirical methods. The logistic function model was also more robust against data sparseness, meaning that less data was required to obtain accurate, precise, quantitative growth phenotypes. Conclusion Microbial cultures spotted onto agar media are widely used for genotype
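
    As an illustration of the modeling step (not YeastXtract itself; data synthesized), a logistic growth curve can be fitted to spot-intensity measurements to estimate the carrying capacity K and the maximum specific rate r:

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            return K / (1.0 + np.exp(-r * (t - t0)))

        t = np.linspace(0, 48, 25)  # h
        y = logistic(t, 1.0, 0.25, 20.0) + np.random.normal(0, 0.02, t.size)

        (K, r, t0), _ = curve_fit(logistic, t, y, p0=(1.0, 0.1, 15.0))
        print(f"K = {K:.2f}, r = {r:.3f} 1/h")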

  18. Organizational changes and automation: By means of a customer-oriented policy the so-called 'island culture' disappears: Part 2

    Automation offers great opportunities in the efforts of energy utilities in the Netherlands to reorganize towards more customer-oriented businesses. However, automation in itself is not enough. First, the organizational structure has to be changed considerably. Various energy utilities have already started on it. The restructuring principle is the same everywhere, but the way it is implemented differs widely. In this article attention is paid to different customer information systems. These systems can put an end to the so-called island culture within the energy utility organizations. The systems discussed are IRD of Systema and RIVA of SAP (both German software businesses), and two Dutch systems: Numis-2000 of Multihouse and KIS/400 of NUON Info-Systemen

  19. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory[MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ)
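
    The contraction-ordering idea can be illustrated with NumPy's einsum machinery, which likewise searches for a binary contraction order that minimizes operation count (a loose analogy to TCE, not TCE itself; tensors and index labels are placeholders):

        import numpy as np

        V = np.random.rand(8, 8, 8, 8)   # placeholder integral tensor
        T2 = np.random.rand(8, 8, 8, 8)  # placeholder amplitude tensor

        # Search for the cheapest pairwise (binary) contraction order.
        path, info = np.einsum_path("abij,cdab->cdij", V, T2, optimize="optimal")
        print(info)  # reports the chosen contraction order and flop estimate

        result = np.einsum("abij,cdab->cdij", V, T2, optimize=path)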

  20. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Mohan A V S K Katta

    Full Text Available Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C utilizing HTSlib (v1.2.1), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs using Bpipe in parallel and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, the Raspberry tool and associated scripts are made available online for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.

  1. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K


    Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High-throughput platforms like Illumina HiSeq produce terabytes of raw data that require quick processing. Quality control of the data is an important component prior to downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C utilizing HTSlib (v1.2.1), for computing read/base-level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs using Bpipe in parallel and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, the Raspberry tool and associated scripts are made available online for rapid quality control analysis of large-scale next generation sequencing (Illumina) data. PMID:26460497

  2. Capillary electrophoresis for automated on-line monitoring of suspension cultures: Correlating cell density, nutrients and metabolites in near real-time.

    Alhusban, Ala A; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M


    Increasingly stringent requirements on the production of biopharmaceuticals demand monitoring of process parameters that impact product quality. We developed an automated platform for on-line, near real-time monitoring of suspension cultures by integrating microfluidic components for cell counting and filtration with a high-resolution separation technique. This enabled the correlation of the growth of a human lymphocyte cell line with changes in the essential metabolic markers glucose, glutamine, leucine/isoleucine and lactate, determined by Sequential Injection-Capillary Electrophoresis (SI-CE). Using 8.1 mL of media (41 μL per run), the metabolic status and cell density were recorded every 30 min over 4 days. The presented platform is flexible, simple and automated, and allows for fast, robust and sensitive analysis with low sample consumption and high sample throughput. It is compatible with up- and out-scaling, and as such provides a promising new solution to meet future demands in process monitoring in the biopharmaceutical industry. PMID:27114228

  3. Evaluation of an automated rapid diagnostic assay for detection of Gram-negative bacteria and their drug-resistance genes in positive blood cultures.

    Masayoshi Tojo

    Full Text Available We evaluated the performance of the Verigene Gram-Negative Blood Culture Nucleic Acid Test (BC-GN; Nanosphere, Northbrook, IL, USA), an automated multiplex assay for rapid identification of positive blood cultures caused by 9 Gram-negative bacteria (GNB) and for detection of 9 genes associated with β-lactam resistance. The BC-GN assay can be performed directly from positive blood cultures with 5 minutes of hands-on time and 2 hours of run time per sample. A total of 397 GNB-positive blood cultures were analyzed using the BC-GN assay. Of the 397 samples, 295 were simulated samples prepared by inoculating GNB into blood culture bottles, and the remainder were clinical samples from 102 patients with positive blood cultures. Aliquots of the positive blood cultures were tested by the BC-GN assay. The results of bacterial identification between the BC-GN assay and standard laboratory methods were as follows: Acinetobacter spp. (39 isolates for the BC-GN assay/39 for the standard methods), Citrobacter spp. (7/7), Escherichia coli (87/87), Klebsiella oxytoca (13/13), and Proteus spp. (11/11); Enterobacter spp. (29/30); Klebsiella pneumoniae (62/72); Pseudomonas aeruginosa (124/125); and Serratia marcescens (18/21), respectively. From the 102 clinical samples, 104 bacterial species were identified with the BC-GN assay, whereas 110 were identified with the standard methods. The BC-GN assay also detected all β-lactam resistance genes tested (233 genes), including 54 bla(CTX-M), 119 bla(IMP), 8 bla(KPC), 16 bla(NDM), 24 bla(OXA-23), 1 bla(OXA-24/40), 1 bla(OXA-48), 4 bla(OXA-58), and 6 bla(VIM). The data show that the BC-GN assay provides rapid detection of GNB and β-lactam resistance genes in positive blood cultures and has the potential to contribute to optimal patient management through earlier detection of major antimicrobial resistance genes.

  4. A comprehensive and precise quantification of the calanoid copepod Acartia tonsa (Dana) for intensive live feed cultures using an automated ZooImage system

    Vu, Minh Thi Thuy; Jepsen, Per Meyer; Hansen, Benni Winding


    In this study, we propose a novel method for highly precise classification of the development stages and biomass of A. tonsa in intensive live feed cultures, using an automated ZooImage system, a freeware image analysis package. We successfully created a training set of 13 categories, including 7 copepod and 6 non-copepod (debris) groups. ZooImage used this training set for automatic discrimination through a random forest algorithm with a general accuracy of 92.8%. ZooImage showed no significant difference in classifying solitary eggs, or mixed nauplii stages and copepodites, compared to personal microscope observation. Furthermore, ZooImage was also adapted for automatic estimation of A. tonsa biomass. This is the first study that has successfully applied ZooImage software, enabling fast and reliable quantification of the development stages and the biomass of A. tonsa. As a result, relevant...
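
    A minimal Python sketch of the classification step, standing in for ZooImage's own implementation, assuming per-particle image features have already been extracted (features, labels and sizes invented):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.random((500, 6))      # placeholder per-particle feature vectors
        y = rng.integers(0, 13, 500)  # 13 categories: 7 copepod + 6 debris

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))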

  5. M2M Automation: Matlab-to-MapReduce Automation

    Archana C S


    Full Text Available Abstract- MapReduce is a very popular parallel programming model for cloud computing platforms and has become an effective method for processing massive data using a cluster of computers. A programming-language-to-MapReduce automator is a possible solution to help traditional programmers easily deploy an application to cloud systems by translating sequential code to MapReduce code. M2M Automation mainly focuses on automating numerical computations by using Hadoop at the back end. M2M automates Hadoop for faster execution of Matlab commands using MapReduce code.
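
    The restructuring idea can be shown in plain Python: a sequential element-wise computation expressed as map and reduce phases that a Hadoop back end could distribute (a conceptual sketch, not M2M output):

        from functools import reduce

        data = range(1, 10_001)

        # Sequential original: total = sum(x * x for x in data)
        mapped = map(lambda x: x * x, data)            # map phase
        total = reduce(lambda a, b: a + b, mapped, 0)  # reduce phase
        print(total)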

  6. Spheroid formation of human thyroid cancer cells in an automated culturing system during the Shenzhou-8 Space mission.

    Pietsch, Jessica; Ma, Xiao; Wehland, Markus; Aleshcheva, Ganna; Schwarzwälder, Achim; Segerer, Jürgen; Birlem, Maria; Horn, Astrid; Bauer, Johann; Infanger, Manfred; Grimm, Daniela


    Human follicular thyroid cancer cells were cultured in Space to investigate the impact of microgravity on 3D growth. For this purpose, we designed and constructed a cell container that can endure enhanced physical forces, is connected to fluid storage chambers, performs media changes and cell harvesting automatically and supports cell viability. The container consists of a cell suspension chamber, two reserve tanks for medium and fixative and a pump for fluid exchange. The selected materials proved durable, non-cytotoxic, and did not inactivate RNAlater. This container was operated automatically during the unmanned Shenzhou-8 Space mission. FTC-133 human follicular thyroid cancer cells were cultured in Space for 10 days. Culture medium was exchanged after 5 days in Space and the cells were fixed after 10 days. The experiment revealed a scaffold-free formation of extraordinary large three-dimensional aggregates by thyroid cancer cells with altered expression of EGF and CTGF genes under real microgravity. PMID:23866977

  7. Automated Storage Retrieval System (ASRS) Role Towards Achievement of Safety Objective and Safety Culture in Radioactive Storage Facilities

    Waste Technology Development Centre (WasTeC) was awarded the ISO 9001:2000 quality management system certification, now known as ISO 9001:2008, in June 2004. The scope of the unit's ISO certification is radioactive waste management and storage of radioactive material. To meet the objectives and requirements of ISO 9001:2008, WasTeC started a project known as the Automated Storage and Retrieval System (ASRS). ASRS is a computer-controlled method for automatically depositing and retrieving waste from defined locations. The system replaces the existing process of storage and retrieval of radioactive waste at the storage facility at block 33. The main objective of this project is to reduce radiation exposure to workers and the potential for forklift accidents during storage and retrieval of radioactive waste. By using the ASRS, WasTeC/Nuclear Malaysia can provide safe storage of radioactive waste, eliminate repeated handling, and improve productivity. (author)

  8. Parallel Programming Environment for OpenMP

    Insung Park; Michael J. Voss; Seon Wook Kim; Rudolf Eigenmann


    We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers in time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program...

  9. SKU Splitting Simulation for an Automated Picking System Based on a Parallel Picking Strategy

    Wu, Yingying; Wu, Yaohua


    To improve the efficiency of an automated picking system, an SKU splitting model based on parallel picking was built. The optimization objective of the model is that each dispenser can finish buffering its goods before the order streams merge. The model uses a delay factor to represent the difference between the order merging time and the goods buffering time, and it is proved that the channel delay factor and the delay time have the same variation trend. To solve the model, a heuristic splitting algorithm based on the delay factor was designed. Simulation results indicate that the heuristic algorithm shortens picking time by 8.55% to 11.7%.

  10. Rapid detection of significant bacteriuria by use of an automated Limulus amoebocyte lysate assay.

    Jorgensen, J H; Alexander, G A


    Previous studies have demonstrated that significant gram-negative bacteriuria can be detected by using the Limulus amoebocyte lysate test. A series of 580 urine specimens were tested in parallel with the automated MS-2 (Abbott Laboratories) assay and with quantitative urine bacterial cultures. The overall ability of the MS-2 Limulus amoebocyte lysate test to correctly classify urine specimens as containing either greater than or equal to 10(5) organisms or less than 10(5) organisms per ml dur...

  11. Multiple Microfermentor Battery: a Versatile Tool for Use with Automated Parallel Cultures of Microorganisms Producing Recombinant Proteins and for Optimization of Cultivation Protocols

    Frachon, Emmanuel; Bondet, Vincent; Munier-Lehmann, Hélène; Bellalou, Jacques


    A multiple microfermentor battery was designed for high-throughput recombinant protein production in Escherichia coli. This novel system comprises eight aerated glass reactors with a working volume of 80 ml and a moving external optical sensor for measuring optical densities at 600 nm (OD600) ranging from 0.05 to 100 online. Each reactor can be fitted with miniature probes to monitor temperature, dissolved oxygen (DO), and pH. Independent temperature regulation for each vessel is obtained wit...

  12. Home Automation

    Ahmed, Zeeshan


    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time, software- and hardware-oriented house automation research project, designed and implemented to automate a house's electricity and to provide a security system that detects the presence of unexpected behavior.

  13. Application of selective hydride generation-automated cryotrapping, gas chromatography, AAS to speciation analysis of methylated arsenicals in water and cell cultures at sub-PPB levels

    Complete text of publication follows. Speciation analysis methods based on selective hydride generation-cryotrapping-gas chromatography-AAS with a multiatomizer represent a viable alternative and complementary technique to approaches based on a separation technique (most often liquid chromatography) connected to an ICP-MS detector. HG-based methods are not confined to minute sample volumes and usually do not require an extraction step. Therefore, excellent limits of detection can be achieved with relatively simple and inexpensive instrumentation, and the risk of alteration of speciation due to sample pretreatment is minimized. Only species forming volatile hydrides are available for analysis, i.e., tri- and pentavalent forms of inorganic, mono-, di- and trimethylated compounds in the case of arsenic. Since exactly these species are found in the human detoxication metabolism of iAs, this method is very suitable for toxicological studies. Application of a fully automated system including the cryotrapping step will be presented. Limits of detection of the method were 21 ppt for iAs (limited by blanks) and 3-10 ppt for methylated forms (limited by signal-to-noise ratio). Sample throughput was approximately 8 per hour. Analytical performance of the system will be demonstrated on speciation analysis of methylated arsenicals in water reference materials with a total arsenic content of 0.7-1.3 ppb. A second example is the analysis of arsenic species in cell culture experiments, where methylating cells were exposed to iAs at 0.25-0.5 μM levels. Methylated species transformed by the cells are then determined in cell lysates and cell culture medium. Notably, all forms exhibit the same sensitivity and can therefore be calibrated against a single stable As form. The authors kindly acknowledge the financial support from University of North Carolina at Chapel Hill- Gillings Innovation Laboratory, Academy of Sciences of the Czech Republic (Institutional research plan No. AV0Z 40310501), Czech

  14. Parallelizing Mizar

    Urban, Josef


    This paper surveys and describes the implementation of parallelization of Mizar proof checking and of related Mizar utilities. The implementation makes use of Mizar's compiler-like division into several relatively independent passes, with typically quite different processing speeds. The information produced in earlier (typically much faster) passes can be used to parallelize the later (typically much slower) passes. The parallelization works by splitting the formalization into a suitable number of pieces that are processed in parallel and assembling the required results from them. The implementation is evaluated on examples from the Mizar library, and future extensions are discussed.

  15. Parallel computations


    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  16. Evaluation of 3 automated real-time PCR (Xpert C. difficile assay, BD MAX Cdiff, and IMDx C. difficile for Abbott m2000 assay) for detecting Clostridium difficile toxin gene compared to toxigenic culture in stool specimens.

    Yoo, Jaeeun; Lee, Hyeyoung; Park, Kang Gyun; Lee, Gun Dong; Park, Yong Gyu; Park, Yeon-Joon


    We evaluated the performance of the 3 automated systems (Cepheid Xpert, BD MAX, and IMDx C. difficile for Abbott m2000) detecting the Clostridium difficile toxin gene compared to toxigenic culture. Of the 254 stool specimens tested, 87 (48 slight, 35 moderate, and 4 heavy growth) were toxigenic culture positive. The overall sensitivities and specificities were 82.8% and 98.8% for Xpert, 81.6% and 95.8% for BD MAX, and 62.1% and 99.4% for IMDx, respectively. The specificity was significantly higher in IMDx than BD MAX (P = 0.03). All stool samples underwent toxin A/B enzyme immunoassay testing, and of the 254 samples, only 29 samples were positive and 2 of them were toxigenic culture negative. Considering the rapidity and high specificity of the real-time PCR assays compared to the toxigenic culture, they can be used as the first test method for C. difficile infection/colonization. PMID:26081240

  17. Parallel quicksort

    Vrto, I. (Inst. of Technical Cybernetics, Slovak Academy of Sciences, Dubravska Cesta 9, 842-37 Bratislava (CS)); Chlebus, B.S. (Dept. of Computer Science, Univ. of California, Riverside, CA (US))


    This paper reports on the development of a parallel version of quicksort on a CRCW PRAM. The algorithm uses n processors and linear space to sort n keys in expected time O(log n) with high probability.
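
    The divide-and-conquer parallelization can be sketched in Python, with a process pool standing in for the PRAM processors (a conceptual sketch, not the paper's CRCW PRAM algorithm):

        import random
        from concurrent.futures import ProcessPoolExecutor

        def quicksort(xs):
            if len(xs) <= 1:
                return xs
            pivot = xs[len(xs) // 2]
            return (quicksort([x for x in xs if x < pivot])
                    + [x for x in xs if x == pivot]
                    + quicksort([x for x in xs if x > pivot]))

        if __name__ == "__main__":
            keys = [random.random() for _ in range(100_000)]
            pivot = keys[0]
            lo = [x for x in keys if x < pivot]
            hi = [x for x in keys if x >= pivot]
            # Sort the two partitions in parallel on separate processes.
            with ProcessPoolExecutor(max_workers=2) as pool:
                left, right = pool.map(quicksort, [lo, hi])
            assert left + right == sorted(keys)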

  18. Library Automation

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.


    New technologies provide libraries with several new materials, media and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual efforts in library routines, covering collection, storage, administration, processing, preservation, communication, etc.

  19. Computer automation and artificial intelligence

    Rapid advances in computing resulting from the microchip revolution have increased its application manifold, particularly for computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence leads towards parallel processing. An overview of parallel processing with emphasis on the transputer is also provided. A fuzzy knowledge-based controller for drug delivery in muscle relaxant anesthesia, implemented on a transputer, is described. 4 figs. (author)

  20. Process automation

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  1. cultural

    Irene Kreutz


    Full Text Available This is a qualitative study that adopted anthropology and ethnography as its theoretical-methodological framework. It presents the experiences of women of a community in the health-disease process, with the objective of understanding the socio-cultural and historical determinants of the prevention and treatment practices adopted by the cultural group, by means of semi-structured interviews. The themes that emerged were: the relationship between food and the health-disease process, relations with the official health system, and the health-disease process and the supernatural. The data revealed that the residents of the investigated community have a particular way of explaining their therapeutic procedures. We consider it the role of health professionals to adopt, in their practices, approaches that consider the individual in his or her socio-cultural and historical dimension, bearing in mind the enormous cultural diversity of our country.

  2. Automation of antimicrobial activity screening.

    Forry, Samuel P; Madonna, Megan C; López-Pérez, Daneli; Lin, Nancy J; Pasco, Madeleine D


    Manual and automated methods were compared for routine screening of compounds for antimicrobial activity. Automation generally accelerated assays and required less user intervention while producing comparable results. Automated protocols were validated for planktonic, biofilm, and agar cultures of the oral microbe Streptococcus mutans, which is commonly associated with tooth decay. Toxicity assays for the known antimicrobial compound cetylpyridinium chloride (CPC) were validated against planktonic, biofilm-forming, and 24 h biofilm culture conditions, and several commonly reported toxicity/antimicrobial activity measures were evaluated: the 50% inhibitory concentration (IC50), the minimum inhibitory concentration (MIC), and the minimum bactericidal concentration (MBC). Using automated methods, three halide salts of cetylpyridinium (CPC, CPB, CPI) were rapidly screened with no detectable effect of the counter ion on antimicrobial activity. PMID:26970766
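
    A minimal Python sketch of an IC50 estimate, fitting a Hill-type dose-response curve to normalized viability data (concentrations and values invented):

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(c, ic50, h):
            return 1.0 / (1.0 + (c / ic50) ** h)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])  # µg/mL
        viab = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.10, 0.03])

        (ic50, h), _ = curve_fit(hill, conc, viab, p0=(0.5, 1.0))
        print(f"IC50 = {ic50:.2f} µg/mL (Hill slope {h:.1f})")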

  3. Automated Microbial Metabolism Laboratory


    Development of the automated microbial metabolism laboratory (AMML) concept is reported. The focus of the AMML effort was on the advanced labeled release experiment. Labeled substrates, inhibitors, and temperatures were investigated to establish a comparative biochemical profile. Profiles at three time intervals on soil and pure cultures of bacteria isolated from soil were prepared to establish a complete library. The development of a strategy for the return of a soil sample from Mars is also reported.




    A new algorithm for the stabilization of (possibly turbulent, chaotic) distributed systems, governed by linear or nonlinear systems of equations, is presented. The SPA (Stabilization Parallel Algorithm) is based on a systematic parallel decomposition of the problem (related to arbitrarily overlapping decomposition of domains) and on a penalty argument. SPA is presented here for the case of linear parabolic equations, with distributed or boundary control. It extends to practically all linear and nonlinear evolution equations, as will be presented in several other publications.

  5. Automation of cell line development

    Lindgren, Kristina; Salmén, Andréa; Lundgren, Mats; Bylund, Lovisa; Ebler, Åsa; Fäldt, Eric; Sörvik, Lina; Fenge, Christel; Skoging-Nyberg, Ulrica


    An automated platform for development of high producing cell lines for biopharmaceutical production has been established in order to increase throughput and reduce development costs. The concept is based on the Cello robotic system (The Automation Partnership) and covers screening for colonies and expansion of static cultures. In this study, the glutamine synthetase expression system (Lonza Biologics) for production of therapeutic monoclonal antibodies in Chinese hamster ovary cells was used ...

  6. An automated HIV-1 Env-pseudotyped virus production for global HIV vaccine trials.

    Anke Schultz

    Full Text Available BACKGROUND: Infections with HIV still represent a major human health problem worldwide and a vaccine is the only long-term option to fight efficiently against this virus. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To cover the increasing demands for HIV pseudoviruses, a complete cell culture and transfection automation system has been developed. METHODOLOGY/PRINCIPAL FINDINGS: The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5×3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and the product. HIV pseudovirus stocks at scales from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automatically produced pseudoviruses were of equivalent quality to those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity. CONCLUSIONS: An automated HIV pseudovirus production system has been successfully established. It allows the high-quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell

  7. Robotic platform for parallelized cultivation and monitoring of microbial growth parameters in microwell plates.

    Knepper, Andreas; Heiser, Michael; Glauche, Florian; Neubauer, Peter


    The enormous space of possible bioprocess variations challenges process development to fix a commercial process within limited time and costs. Although some cultivation systems and some devices for unit operations combine the latest technology on miniaturization, parallelization, and sensing, the degree of automation in upstream and downstream bioprocess development is still limited to single steps. We aim to face this challenge by an interdisciplinary approach to significantly shorten development times and costs. As a first step, we scaled down analytical assays to the microliter scale and created automated procedures for starting the cultivation and monitoring the optical density (OD), pH, concentrations of glucose and acetate in the culture medium, and product formation in fed-batch cultures in the 96-well format. Then, the separate measurements of pH, OD, and concentrations of acetate and glucose were combined into one method. This method enables automated process monitoring at dedicated intervals (e.g., also during the night). By this approach, we managed to increase the information content of cultivations in 96-microwell plates, thus turning them into a suitable tool for high-throughput bioprocess development. Here, we present the flowcharts as well as cultivation data of our automation approach. PMID:25208534

  8. Parallel R

    McCallum, Ethan


    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product, unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  9. Towards Distributed Memory Parallel Program Analysis

    Quinlan, D; Barany, G; Panas, T


    This paper presents a parallel attribute evaluation for distributed-memory parallel computer architectures, where previously only shared-memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed-memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  10. Towards Automated Testing of Web Service Choreographies

    Besson F.M.; Leal P.M.B.; Kon F.; Goldman A; Milojicic D.


    Web service choreographies have been proposed as a decentralized, scalable way of composing services in a SOA environment. In spite of all the benefits of choreographies, the decentralized flow of information, the parallelism, and multiple-party communication restrict the automated testing of choreographies at design and runtime. The goal of our research is to adapt the automated testing techniques used by the Agile Software Development community to the SOA context. To achieve that, we seek to ...

  11. Automation Security

    Mirzoev, Dr. Timur


    Web-based Automated Process Control systems are a new type of application that uses the Internet to control industrial processes with access to real-time data. Supervisory control and data acquisition (SCADA) networks contain computers and applications that perform key functions in providing essential services and commodities (e.g., electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. As such, they are part of the nation's critical infrastructu...

  12. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein-marker profiling of tumoroid cross-section images. Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin (Ecad) and progesterone receptor (PR), were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross-section images. The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, suitable for rapid large-scale investigations of anti-cancer compounds for drug development.
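
    The pixel-classification step can be sketched in Python: cluster pixel colours into five categories with k-means and reshape the labels into a per-pixel category map (a simplified stand-in for the published protocol; the image is synthetic):

        import numpy as np
        from sklearn.cluster import KMeans

        image = np.random.rand(64, 64, 3)  # placeholder RGB cross-section image
        pixels = image.reshape(-1, 3)

        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
        segmented = labels.reshape(image.shape[:2])  # per-pixel category map
        print(np.bincount(labels))  # pixel count per category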

  13. Computer-Aided Parallelizer and Optimizer

    Jin, Haoqiang


    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  14. Automating the multiprocessing environment

    Arpasi, Dale J.


    An approach to automate the programming and operation of tree-structured networks of multiprocessor systems is discussed. A conceptual, knowledge-based operating environment is presented, and requirements for two major technology elements are identified as follows: (1) An intelligent information translator is proposed for implementing information transfer between dissimilar hardware and software, thereby enabling independent and modular development of future systems and promoting language independence of codes and information; (2) A resident system activity manager, which recognizes the system's capabilities and monitors the status of all systems within the environment, is proposed for integrating dissimilar systems into effective parallel processing resources to optimally meet user needs. Finally, key computational capabilities which must be provided before the environment can be realized are identified.

  15. Study on Parallel Computing

    Guo-Liang Chen; Guang-Zhong Sun; Yun-Quan Zhang; Ze-Yao Mo


    In this paper, we present a general survey on parallel computing. The main contents include the parallel computer system, which is the hardware platform of parallel computing; the parallel algorithm, which is its theoretical basis; and parallel programming, which is its software support. After that, we also introduce some parallel applications and enabling technologies. We argue that parallel computing research should form an integrated methodology of "architecture - algorithm - programming - application". Only in this way can parallel computing research develop continuously and remain realistic.

  16. Acquisition of data from on-line laser turbidimeter and calculation of some kinetic variables in computer-coupled automated fed-batch culture

    Output signals of a commercially available on-line laser turbidimeter exhibit fluctuations due to air and/or CO2 bubbles. A simple data processing algorithm and personal computer software have been developed to smooth the noisy turbidity data acquired and to use them for on-line calculation of several kinetic variables involved in batch and fed-batch cultures of uniformly dispersed microorganisms. With this software, about 10³ instantaneous turbidity data acquired over 55 s are averaged and converted to dry cell concentration, X, every minute. Also, the volume of the culture broth, V, is estimated from the averaged output data of the weight loss of the feed solution reservoir, W, using an electronic balance on which the reservoir is placed. Then, the software performs linear regression analyses over the past 30 min of the total biomass, VX, the natural logarithm of the total biomass, ln(VX), and the weight loss, W, in order to calculate the volumetric growth rate, d(VX)/dt, the specific growth rate, μ [= dln(VX)/dt], and the rate of weight loss, dW/dt, every minute in a fed-batch culture. The software performing these first-order regression analyses of VX, ln(VX) and W was applied to batch and fed-batch cultures of Escherichia coli on minimum synthetic or natural complex media. Sample determination coefficients of the three variables (VX, ln(VX) and W) were close to unity, indicating that the calculations are accurate. Furthermore, the growth yield, Yx/s, and the specific substrate consumption rate, qsc, were estimated approximately from the dW/dt data in a 'balanced' fed-batch culture of E. coli on the minimum synthetic medium, where the computer-aided substrate-feeding system automatically matches the cell growth. (author)
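
    The specific growth rate defined above is just the slope of a first-order least-squares fit of ln(VX) against time over the trailing 30-min window. A minimal sketch of that regression, assuming one (t, ln VX) sample per minute and at least two samples in the window (the function and variable names are illustrative, not the authors' software):

        #include <algorithm>
        #include <vector>

        // Slope of the least-squares line y = a + b*t over the last `window` samples.
        // With y = ln(VX) and t in minutes, the returned slope b is the specific
        // growth rate mu [1/min]; the same routine applied to VX or W gives
        // d(VX)/dt or dW/dt.
        double slope(const std::vector<double>& t, const std::vector<double>& y,
                     std::size_t window) {
            std::size_t n = std::min(window, t.size());
            std::size_t start = t.size() - n;
            double st = 0, sy = 0, stt = 0, sty = 0;
            for (std::size_t i = start; i < t.size(); ++i) {
                st  += t[i];        sy  += y[i];
                stt += t[i] * t[i]; sty += t[i] * y[i];
            }
            return (n * sty - st * sy) / (n * stt - st * st);
        }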

  17. A Performance Analysis Tool for PVM Parallel Programs

    Chen Wang; Yin Liu; Changjun Jiang; Zhaoqing Zhang


    In this paper, we introduce the design and implementation of ParaVT, a visual performance analysis and parallel debugging tool. In ParaVT, we propose an automated instrumentation mechanism. Based on this mechanism, ParaVT automatically analyzes the performance bottlenecks of parallel applications and provides a visual user interface to monitor and analyze the performance of parallel programs. In addition, it also supports certain extensions.

  18. Automated Budget System

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  19. Special parallel processing workshop



    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

  20. Manufacturing and automation

    Ernesto Córdoba Nieto


    The article presents concepts and definitions from different sources concerning automation. The work approaches automation by virtue of the author’s experience in manufacturing production; why and how automation projects are embarked upon is considered. Technological reflection regarding the progressive advances or stages of automation in the production area is stressed. Coriat and Freyssenet’s thoughts about and approaches to the problem of automation and its current state are taken and e...

  1. Automation tools for flexible aircraft maintenance.

    Prentice, William J.; Drotning, William D.; Watterberg, Peter A.; Loucks, Clifford S.; Kozlowski, David M.


    This report summarizes the accomplishments of the Laboratory Directed Research and Development (LDRD) project 26546 at Sandia, during the period FY01 through FY03. The project team visited four DoD depots that support extensive aircraft maintenance in order to understand critical needs for automation, and to identify maintenance processes for potential automation or integration opportunities. From the visits, the team identified technology needs and application issues, as well as non-technical drivers that influence the application of automation in depot maintenance of aircraft. Software tools for automation facility design analysis were developed, improved, extended, and integrated to encompass greater breadth for eventual application as a generalized design tool. The design tools for automated path planning and path generation have been enhanced to incorporate those complex robot systems with redundant joint configurations, which are likely candidate designs for a complex aircraft maintenance facility. A prototype force-controlled actively compliant end-effector was designed and developed based on a parallel kinematic mechanism design. This device was developed for demonstration of surface finishing, one of many in-contact operations performed during aircraft maintenance. This end-effector tool was positioned along the workpiece by a robot manipulator, programmed for operation by the automated planning tools integrated for this project. Together, the hardware and software tools demonstrate many of the technologies required for flexible automation in a maintenance facility.

  2. Logical Inference Techniques for Loop Parallelization

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence


    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization by verifying the independence of the loop's memory references, factorizing the resulting conditions into a sequence of sufficient independence tests that are evaluated statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from the PERFECT-CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full-program speedups than the Intel and IBM Fortran compilers.

  3. Parallel Programming with Intel Parallel Studio XE

    Blair-Chappell , Stephen


    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore machines and leverage their power in your programs. Sharing hands-on case studies and real-world examples, the

  4. Parallelism in Constraint Programming

    Rolf, Carl Christian


    Writing efficient parallel programs is the biggest challenge of the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard even in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent. Parallelism can increase performance and reduce power consumption at the same time. Many programs will execute faster on a...

  5. Manufacturing and automation

    Ernesto Córdoba Nieto


    The article presents concepts and definitions from different sources concerning automation. The work approaches automation by virtue of the author’s experience in manufacturing production; why and how automation projects are embarked upon is considered. Technological reflection regarding the progressive advances or stages of automation in the production area is stressed. Coriat and Freyssenet’s thoughts about and approaches to the problem of automation and its current state are taken and examined, especially that referring to the problem’s relationship with reconciling the level of automation with the flexibility and productivity demanded by competitive, worldwide manufacturing.

  6. Practical parallel computing

    Morse, H Stephen


    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  7. An automated swimming respirometer



    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  8. Configuration Management Automation (CMA)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  9. Intensive Culture: Religion and Social Theory in Contemporary Culture

    Lash, Scott


    Contemporary culture, today’s capitalism - our global information society - is ever expanding, is ever more extensive. And yet we seem to be experiencing a parallel phenomenon which can only be characterized as intensive. This book is dedicated to the study of such intensive culture. While extensive culture is a culture of the same: a culture of fixed equivalence; intensive culture is a culture of difference, of in-equivalence – the singular. Intensities generate what we encounter. They are vi...

  10. Parallel flow diffusion battery

    Yeh, Hsu-Chi; Cheng, Yung-Sung


    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  11. Workflow automation architecture standard

    Moshofsky, R.P.; Rohen, W.T. [Boeing Computer Services Co., Richland, WA (United States)


    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  12. Parallel logic programming systems

    Chassin De Kergommeaux, J.; Codognet, Philippe


    Parallelizing logic programming has attracted much interest in the research community, because of the intrinsic or- and and-parallelism of logic programs. One research stream aims at transparent exploitation of parallelism in existing logic programming languages such as Prolog, while the family of concurrent logic languages develops constructs allowing programmers to express the concurrency, that is the communication and synchronization between parallel processes, inside their algorithms. This p...

  13. Parallel computing works!

    Fox, Geoffrey C; Messina, Guiseppe C


    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  14. Introduction to parallel programming

    Brawer, Steven


    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  15. Parallel adaptive wavelet collocation method for PDEs

    Nejadmalayeri, Alireza (FortiVenti Inc., Suite 404, 999 Canada Place, Vancouver, BC, V6C 3E2, Canada); Vezolainen, Alexei (Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309, United States); Brown-Dymkoski, Eric (Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309, United States); Vasilyev, Oleg V. (Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309, United States)


    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  16. Developing Parallel Programs

    Ranjan Sen


    Parallel programming is an extension of sequential programming; today, it is becoming the mainstream paradigm in day-to-day information processing. Its aim is to build the fastest programs on parallel computers. The methodologies for developing a parallel program can be put into integrated frameworks. Development focuses on algorithms, languages, and how the program is deployed on the parallel computer.

  17. CNTFET Parallel in Parallel out Shift Register

    T. Jayanthy

    In this paper, a compact model for the carbon nanotube field-effect transistor has been designed by considering various device parameters such as length, number of tubes, chiral vector, etc. The modeled CNTFET is used to design various digital circuits, in particular a parallel-in parallel-out shift register. The results of HSPICE simulation performed on the designed PIPO shift register show superior performance over conventional MOSFET designs in terms of power dissipation, power-delay product, size, etc.

  18. Automated Parallel Computing Tools for Multicore Machines and Clusters Project

    National Aeronautics and Space Administration — We propose to improve productivity of high performance computing for applications on multicore computers and clusters. These machines built from one or more chips...

  19. Shoe-String Automation

    Duncan, M.L.


    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  20. Software Test Automation in Practice: Empirical Observations

    Jussi Kasurinen


    The objective of this industry study is to shed light on the current situation and improvement needs in software test automation. To this end, 55 industry specialists from 31 organizational units were interviewed. In parallel with the survey, a qualitative study was conducted in 12 selected software development organizations. The results indicated that the software testing processes usually follow systematic methods to a large degree, and have only little immediate or critical requirements for resources. Based on the results, the testing processes have approximately three fourths of the resources they need, and have access to a limited, but usually sufficient, group of testing tools. As for the test automation, the situation is not as straightforward: based on our study, the applicability of test automation is still limited and its adaptation to testing contains practical difficulties in usability. In this study, we analyze and discuss these limitations and difficulties.

  1. Software Test Automation in Practice: Empirical Observations

    Kari Smolander; Ossi Taipale; Jussi Kasurinen


    The objective of this industry study is to shed light on the current situation and improvement needs in software test automation. To this end, 55 industry specialists from 31 organizational units were interviewed. In parallel with the survey, a qualitative study was conducted in 12 selected software development organizations. The results indicated that the software testing processes usually follow systematic methods to a large degree, and have only little immediate or critical requirements fo...

  2. Parallel Atomistic Simulations



    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulation have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods, such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination, are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
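
    Of the three molecular dynamics decompositions named, replicated data is the simplest to sketch: every thread sees all coordinates and takes a slice of the outer force loop. A minimal OpenMP illustration with a placeholder Lennard-Jones-style kernel (an illustrative sketch, not any specific reviewed algorithm):

        #include <vector>

        struct Vec3 { double x, y, z; };

        // Replicated-data force evaluation: every thread sees all N positions;
        // the outer particle loop is simply divided among the threads.
        // f must be pre-sized to pos.size().
        void forces(const std::vector<Vec3>& pos, std::vector<Vec3>& f) {
            const long n = (long)pos.size();
            #pragma omp parallel for schedule(dynamic)
            for (long i = 0; i < n; ++i) {
                Vec3 fi{0.0, 0.0, 0.0};
                for (long j = 0; j < n; ++j) {
                    if (j == i) continue;
                    double dx = pos[i].x - pos[j].x;
                    double dy = pos[i].y - pos[j].y;
                    double dz = pos[i].z - pos[j].z;
                    double r2 = dx * dx + dy * dy + dz * dz;
                    double inv6 = 1.0 / (r2 * r2 * r2);   // placeholder LJ-style kernel
                    double w = 24.0 * inv6 * (2.0 * inv6 - 1.0) / r2;
                    fi.x += w * dx; fi.y += w * dy; fi.z += w * dz;
                }
                f[i] = fi;   // each i is owned by exactly one thread: no race
            }
        }

    Spatial and force decompositions distribute the data as well as the work, which is what makes them scale to machines where a full coordinate copy per process is too costly.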

  3. Invariants for Parallel Mapping

    YIN Yajun; WU Jiye; FAN Qinshan; HUANG Kezhi


    This paper analyzes the geometric quantities that remain unchanged during parallel mapping (i.e., mapping from a reference curved surface to a parallel surface with identical normal direction). The second gradient operator, the second class of integral theorems, the Gauss-curvature-based integral theorems, and the core property of parallel mapping are used to derive a series of parallel mapping invariants or geometrically conserved quantities. These include not only local mapping invariants but also global mapping invariants found to exist both in a curved surface and along curves on the curved surface. The parallel mapping invariants are used to identify important transformations between the reference surface and parallel surfaces. These mapping invariants and transformations have potential applications in geometry, physics, biomechanics, and mechanics in which various dynamic processes occur along or between parallel surfaces.

  4. A parallel buffer tree

    Sitchinava, Nodar; Zeh, Norbert


    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries in the optimal O(psort(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and psort(N) is the parallel I/O complexity of sorting N elements using P processors.

  5. Parallel digital forensics infrastructure.

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick


    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  6. Parallelization in Modern C++

    CERN. Geneva


    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...
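
    One concrete strand of the standardization efforts mentioned here is the C++17 parallel algorithms library, which moves loop-level parallelism out of pragmas and into the standard library. A small sketch, assuming a standard library with execution-policy support (e.g., GCC linked against TBB):

        #include <algorithm>
        #include <execution>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<double> v(1 << 20, 1.5);
            // Element-wise transform, parallelized by the library.
            std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                           [](double x) { return 2.0 * x; });
            // Parallel reduction: no explicit threads, locks, or pragmas.
            double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);
            return sum > 0 ? 0 : 1;
        }

    Higher-level task and continuation models go further still, but even this library-level form already composes with generic code in a way the pragma-based models cannot.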

  7. Parallelism in matrix computations

    Gallopoulos, Efstratios; Sameh, Ahmed H


    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
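
    As a flavor of the dense kernels treated in Part II, the simplest parallel building block is the row-wise matrix-vector product, in which every row's dot product is independent; a minimal OpenMP sketch assuming row-major storage (our illustration, not from the book):

        #include <vector>

        // y = A x for a dense row-major n x n matrix: rows are independent,
        // so the outer loop parallelizes with no synchronization.
        void matvec(const std::vector<double>& A, const std::vector<double>& x,
                    std::vector<double>& y, int n) {
            #pragma omp parallel for
            for (int i = 0; i < n; ++i) {
                double s = 0.0;
                for (int j = 0; j < n; ++j)
                    s += A[(std::size_t)i * n + j] * x[j];
                y[i] = s;
            }
        }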

  8. Explicit parallel programming

    Gamble, James Graham


    While many parallel programming languages exist, they rarely address the issue of communication (and, with it, expressibility and readability). A new language called Explicit Parallel Programming (EPP) attempts to provide this quality by separating the responsibility for the execution of run-time actions from the responsibility for deciding the order in which they occur. The ordering of a parallel algorithm is specified in the new EPP language; run ti...

  9. Parallel Online Learning

    Hsu, Daniel; Karampatziakis, Nikos; Langford, John; Smola, Alex


    In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sh...

  10. Programming Parallel Computers

    Chandy, K. Mani


    This paper is from a keynote address to the IEEE International Conference on Computer Languages, October 9, 1988. Keynote addresses are expected to be provocative (and perhaps even entertaining), but not necessarily scholarly. The reader should be warned that this talk was prepared with these expectations in mind.Parallel computers offer the potential of great speed at low cost. The promise of parallelism is limited by the ability to program parallel machines effectively. This paper explores ...

  11. Practical Parallel Rendering

    Chalmers, Alan


    Meeting the growing demands for speed and quality in rendering computer graphics images requires new techniques. Practical parallel rendering provides one of the most practical solutions. This book addresses the basic issues of rendering within a parallel or distributed computing environment, and considers the strengths and weaknesses of multiprocessor machines and networked render farms for graphics rendering. Case studies of working applications demonstrate, in detail, practical ways of dealing with complex issues involved in parallel processing.

  12. Approach of generating parallel programs from parallelized algorithm design strategies

    WAN Jian-yi; LI Xiao-ying


    Today, parallel programming is dominated by message passing libraries such as the message passing interface (MPI). This article intends to simplify parallel programming by generating parallel programs from parallelized algorithm design strategies. It uses skeletons to abstract parallelized algorithm design strategies as well as parallel architectures. Starting from a problem specification, an abstract parallel Apla+ (abstract programming language+) program is generated from parallelized algorithm design strategies and problem-specific function definitions. By combining with parallel architectures, the implicit parallelism inside the parallelized algorithm design strategies is exploited. With implementation and transformation, a C++ with parallel virtual machine (CPPVM) parallel program is finally generated. The parallelized branch and bound (B&B) and parallelized divide and conquer (D&C) algorithm design strategies are studied in this article as examples, and the approach is illustrated with a case study.
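
    The skeleton idea can be made concrete in a few lines: a divide-and-conquer strategy is captured once as a higher-order template, and the problem-specific functions are plugged in. The sketch below is our illustration of the concept in C++ with std::async; it is not Apla+ or CPPVM.

        #include <functional>
        #include <future>
        #include <vector>

        // Generic divide-and-conquer skeleton: the strategy is written once;
        // the problem-specific pieces are plugged in as functions. A production
        // skeleton would cap the task depth instead of spawning unboundedly.
        template <typename P, typename S>
        S dac(const P& p,
              std::function<bool(const P&)> trivial,
              std::function<S(const P&)> solve,
              std::function<std::vector<P>(const P&)> divide,
              std::function<S(std::vector<S>&)> combine) {
            if (trivial(p)) return solve(p);
            std::vector<P> parts = divide(p);
            std::vector<std::future<S>> futs;
            for (auto& q : parts)              // each subproblem may run on another core
                futs.push_back(std::async(std::launch::async, [&, q] {
                    return dac<P, S>(q, trivial, solve, divide, combine);
                }));
            std::vector<S> results;
            for (auto& f : futs) results.push_back(f.get());
            return combine(results);
        }

    Plugging in mergesort's split, base case, and merge, for instance, yields a task-parallel sort with no explicit thread management, which is the productivity argument behind skeletal approaches.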

  13. Parallel Algorithms and Patterns

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
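
    Two of the named patterns fit in a few lines. The sketch below (ours, not from the presentation) expresses a reduction with an OpenMP clause and a prefix scan with the C++17 library:

        #include <execution>
        #include <numeric>
        #include <vector>

        double patterns_demo(const std::vector<double>& in) {
            // Reduction pattern: combine all elements with an associative op.
            double total = 0.0;
            #pragma omp parallel for reduction(+:total)
            for (long i = 0; i < (long)in.size(); ++i)
                total += in[i];

            // Prefix-scan pattern: out[i] = in[0] + ... + in[i], parallelized
            // by the library.
            std::vector<double> out(in.size());
            std::inclusive_scan(std::execution::par, in.begin(), in.end(), out.begin());
            return total + (out.empty() ? 0.0 : out.back());
        }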

  14. Parallel Online Learning

    Hsu, Daniel; Langford, John; Smola, Alex


    In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sharding approach that present various tradeoffs between delay, degree of parallelism, representation power and empirical performance.
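
    A toy rendering of feature sharding for a linear model, assuming each thread owns one contiguous block of weights and touches only that block; the sketch is synchronous for clarity, whereas the delayed updates analyzed in the paper arise when shards combine their partial results asynchronously.

        #include <thread>
        #include <vector>

        // One SGD step for a linear model y ~ w.x with the weights split into
        // contiguous shards; each thread reads and writes only its own shard.
        void sgd_step(std::vector<double>& w, const std::vector<double>& x,
                      double y, double lr, int shards) {
            std::size_t len = w.size() / shards;           // assumes shards <= w.size()
            std::vector<double> partial(shards, 0.0);
            std::vector<std::thread> pool;
            for (int s = 0; s < shards; ++s)               // sharded partial dot products
                pool.emplace_back([&, s] {
                    std::size_t lo = s * len;
                    std::size_t hi = (s == shards - 1) ? w.size() : lo + len;
                    for (std::size_t i = lo; i < hi; ++i) partial[s] += w[i] * x[i];
                });
            for (auto& t : pool) t.join();
            double pred = 0.0;
            for (double p : partial) pred += p;            // combine step
            double g = pred - y;                           // squared-loss gradient factor
            pool.clear();
            for (int s = 0; s < shards; ++s)               // shard-local weight updates
                pool.emplace_back([&, s] {
                    std::size_t lo = s * len;
                    std::size_t hi = (s == shards - 1) ? w.size() : lo + len;
                    for (std::size_t i = lo; i < hi; ++i) w[i] -= lr * g * x[i];
                });
            for (auto& t : pool) t.join();
        }

    The time between forming the prediction and applying the shard-local updates is exactly the delay that, per the abstract, can hurt learning when examples are temporally correlated.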

  15. CS-Studio Scan System Parallelization

    Kasemir, Kay [ORNL; Pearson, Matthew R [ORNL


    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  16. Automated stopcock actuator

    Vandehey, N. T.; O'Neil, J.P.


    Introduction: We have developed a low-cost stopcock valve actuator for radiochemistry automation built using a stepper motor and an Arduino, an open-source single-board microcontroller. The controller hardware can be programmed to run by serial communication or via two 5–24 V digital lines for simple integration into any automation control system. This valve actuator allows for automated use of a single, disposable stopcock, providing a number of advantages over stopcock manifold systems ...

  17. The Adaptive Automation Design

    Calefato, Caterina; Montanari, Roberto; TESAURI, Francesco


    After considering the positive effects of adaptive automation implementation, this chapter focuses on two partly overlapping phenomena: on the one hand, the role of trust in automation is considered, particularly as to the effects of overtrust and mistrust in automation's reliability; on the other hand, long-term lack of practice of specific operations may lead to skill deterioration in users. As future work, it will be interesting and challenging to explore the conjunction of adaptive automati...

  18. Service functional test automation

    Hillah, Lom Messan; Maesano, Ariele-Paolo; Rosa, Fabio; Maesano, Libero; Lettere, Marco; Fontanelli, Riccardo


    This paper presents the automation of the functional test of services (black-box testing) and services architectures (grey-box testing) that has been developed by the MIDAS project and is accessible on the MIDAS SaaS. In particular, the paper illustrates the solutions of tough functional test automation problems such as: (i) the configuration of the automated test execution system against large and complex services architectures, (ii) the constraint-based test input generation, (iii) the spec...

  19. Automated Weather Observing System

    Department of Transportation — The Automated Weather Observing System (AWOS) is a suite of sensors, which measure, collect, and disseminate weather data to help meteorologists, pilots, and flight...

  20. Laboratory Automation and Middleware.

    Riben, Michael


    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. PMID:26065792

  1. Automated cloning methods

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  2. Parallel reservoir simulator computations

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and relative to scalar calculations it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively on the CRAY C-90

  3. Patterns For Parallel Programming

    Mattson, Timothy G; Massingill, Berna L


    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  4. Parallel computing works


    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  5. Proceedings of the 1988 IEEE international conference on robotics and automation. Volume 1

    These proceedings compile the papers presented at the international conference (1988) sponsored by the IEEE Council on Robotics and Automation. The subjects discussed were: automation and robots of nuclear power stations; algorithms of multiprocessors; parallel processing and computer architecture; and U.S. DOE research programs on nuclear power plants.

  6. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.


    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  7. Automatic Performance Debugging of SPMD Parallel Programs

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan


    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate analysis processes, but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single program multi-data (SPMD) parallel programs. Our system is unique in terms of two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance o...

  8. Library Automation Style Guide.

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  9. Automation in Warehouse Development

    Hamberg, R.; Verriet, J.


    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and support

  10. Automate functional testing

    Ramesh Kalindri


    Currently, software engineers are increasingly turning to the option of automating functional tests, but they have not always been successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  11. Automation of Hubble Space Telescope Mission Operations

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry


    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations including telemetry and forward link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities of the new automated operations era. This paper will provide a high-level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  12. Automation in Immunohematology

    Meenu Bajpai


    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid-phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with shorter turnaround times for an ever-increasing workload. This article discusses the various issues involved in the process.

  13. Automated model building

    Caferra, Ricardo; Peltier, Nicholas


    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  14. Automation in Warehouse Development

    Verriet, Jacques


    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  15. Advances in inspection automation

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano


    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone, or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures, and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibration, and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  16. Compositional C++: Compositional Parallel Programming

    Chandy, K. Mani; Kesselman, Carl


    A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms; imperative and declarative programm...

  17. Parallel nearest neighbor calculations

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  18. Detection Of Control Flow Errors In Parallel Programs At Compile Time

    Bruce P. Lester


    This paper describes a general technique to identify control flow errors in parallel programs, which can be automated into a compiler. The compiler builds a system of linear equations that describes the global control flow of the whole program. Solving these equations using standard techniques of linear algebra can locate a wide range of control flow bugs at compile time. This paper also describes an implementation of this control flow analysis technique in a prototype compiler for a well-known parallel programming language. In contrast to previous research in automated parallel program analysis, our technique is efficient for large programs, and does not limit the range of language features.

  19. Logical inference techniques for loop parallelization

    Oancea, Cosmin E.


    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
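
    The static-then-dynamic structure of the framework can be caricatured in a few lines: when compile-time analysis cannot prove independence, a cheap runtime predicate selects between the parallel and the serial version of the loop. In the sketch below, the duplicate-free test on the index array is a trivial stand-in for the USR-derived sufficient conditions:

        #include <algorithm>
        #include <vector>

        // a[idx[i]] += b[i] is parallel only if idx contains no duplicates.
        // The duplicate test is a stand-in for a real sufficient-independence
        // condition; idx values are assumed in range for a.
        void guarded_update(std::vector<double>& a, const std::vector<double>& b,
                            const std::vector<int>& idx) {
            std::vector<int> s(idx);                       // runtime independence test
            std::sort(s.begin(), s.end());
            bool independent = std::adjacent_find(s.begin(), s.end()) == s.end();
            if (independent) {
                #pragma omp parallel for                   // all writes are distinct
                for (long i = 0; i < (long)idx.size(); ++i)
                    a[idx[i]] += b[i];
            } else {
                for (std::size_t i = 0; i < idx.size(); ++i)   // serial fallback
                    a[idx[i]] += b[i];
            }
        }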

  20. Parallel programming with PCN

    Foster, I.; Tuecke, S.


    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at (c.f. Appendix A).

  1. Parallelism and array processing

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  2. Parallels with nature


    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  3. Balanço de água por aquisição automática de dados em cultura de trigo (Triticum aestivum L.) / The daily water consumption of a wheat culture using atmospheric and soil data

    Celso Luiz Prevedello


    Using automated acquisition of atmospheric and soil water content data, this work quantified the daily water consumption of a wheat culture on a Red Latosol (Oxisol) in Ponta Grossa, Paraná State, Brazil, from August to December 2003, with emphasis on the contribution of rainfall and of upward water fluxes from the deeper soil layers to that consumption. The results showed that, over the monitored period: (a) the wheat culture evapotranspired an average of 6.75 mm of water per day, with the upward water flux through the soil profile contributing 62% of that total; (b) the evapotranspiration rates estimated by the Penman method and by the (pedological) water balance equation tracked each other in time with approximately equal shape but with a lag of about seven days, as if the soil responded to the variations imposed by the atmosphere roughly one week later; (c) rainfall had an important effect on soil water storage, contributing to higher evapotranspiration rates; and (d) because the mean matric potential in the root zone remained close to the critical limit for the crop, it was concluded that irrigation could have potentially positive impacts on the crop by making more water available in the soil and sustaining the higher evapotranspiration levels that are agronomically desirable.

  4. Skeletal parallel programming

    Saez, Fernando; Printista, Alicia Marcela; Piccoli, María Fabiana


    Recently, the high-performance programming community has worked on new templates, or skeletons, for several parallel programming paradigms. This form of programming allows the programmer to reduce development time, since it saves effort in the design, testing, and coding phases. We are concerned with some issues of skeletons that are fundamental to the definition of any skeletal parallel programming system. This paper presents commentaries about these issues in the c...

  5. Introduction to parallel computing


    Introduction to Parallel Computing is a complete end-to-end source of information on almost all aspects of parallel computing from introduction to architectures to programming paradigms to algorithms to programming standards. It is the only book to have complete coverage of traditional Computer Science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data intensive algorithms (search, dynamic programming, data-mining).

  6. Parallelization by Simulated Tunneling

    Waterland, Amos; Appavoo, Jonathan; Seltzer, Margo I.


    As highly parallel heterogeneous computers become commonplace, automatic parallelization of software is an increasingly critical unsolved problem. Continued progress on this problem will require large quantities of information about the runtime structure of sequential programs to be stored and reasoned about. Manually formalizing all this information through traditional approaches, which rely on semantic analysis at the language or instruction level, has historically proved challenging. We ta...

  7. The Parallel C Preprocessor

    Eugene D. Brooks III; Gorda, Brent C.; Karen H. Warren


    We describe a parallel extension of the C programming language designed for multiprocessors that provide a facility for sharing memory between processors. The programming model was initially developed on conventional shared memory machines with small processor counts such as the Sequent Balance and Alliant FX/8, but has more recently been used on a scalable massively parallel machine, the BBN TC2000. The programming model is split-join rather than fork-join. Concurrency is exploited to use a ...

  8. Heterogeneous Parallel Computing

    Feng, Wu


    With processor core counts doubling every 18-24 months and penetrating all markets from high-end servers in supercomputers to desktops and laptops down to even mobile phones, we sit at the dawn of a world of ubiquitous parallelism, one where extracting performance via parallelism is paramount. That is, the "free lunch" to better performance, where programmers could rely on substantial increases in single-threaded performance to improve software, is over. The burden falls on developers to expl...

  9. Combining parallel search and parallel consistency in constraint programming

    Rolf, Carl Christian; Kuchcinski, Krzysztof


    Program parallelization becomes increasingly important when new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Declarative languages, such as constraint programming, can make the transition to parallelism easier by hiding the parallelization details in a framework. Automatic parallelization in constraint programming has mostly focused on parallel search. While search and consist...

  10. Continuous parallel coordinates.

    Heinrich, Julian; Weiskopf, Daniel


    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, and is thus defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
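
    The point-line duality underlying this model is compact enough to state in code: a 2-D data point (x1, x2) becomes the line segment joining (0, x1) and (1, x2) between two vertical axes. A minimal sketch (function names are ours, not the authors'):

        def point_to_pc_segment(x1, x2):
            # Duality: the data point (x1, x2) maps to the segment from
            # (0, x1) on the first axis to (1, x2) on the second axis.
            return (0.0, x1), (1.0, x2)

        def pc_line_at(t, x1, x2):
            # Height of the dual line at horizontal position t in [0, 1];
            # continuous parallel coordinates integrate data-space density
            # along all such lines to obtain a density field.
            return (1.0 - t) * x1 + t * x2

        print(point_to_pc_segment(0.2, 0.8))   # ((0.0, 0.2), (1.0, 0.8))
        print(pc_line_at(0.5, 0.2, 0.8))       # 0.5, midway between the axes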

  11. Improvement of Test Automation

    Räsänen, Timo


    The purpose of this study was to find out how to ensure that the automated testing of MME in Network Verification will continue to run smoothly and reliably while using the in-house developed test automation framework. The goal of this thesis was to reveal the reasons for the currently challenging situation and to find the key elements to be improved in the MME testing carried out by the test automation. A further aim of the study was to find solutions for how to change the current procedures and wa...

  12. Chef infrastructure automation cookbook

    Marschall, Matthias


    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure. The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge of how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step by step and build profound knowledge of how to go about your configuration management

  13. A centralized global automation group in a decentralized organization

    Jeffrey Veitch; Judy Hinderliter-Smith; James Ormand; Jimmy Bruner; Larry Birkemo


    In the latter part of the 1990s, many companies worked to foster a ‘matrix’ style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized ...

  14. A New Era for Cytogenetics Laboratories: Automated Specimen Preparation

    Shaunnessey, M.S.; Martin, A.O.; Sabrin, H.W.; Cimino, M.C.; Rissman, A


    The current capacity of clinical cytogenetics laboratories is limited by the labor intensiveness of the process. Specimen preparation for analysis consists of several steps: culture initiation, culture “harvest” (transfer of cells in culture to microscope slides), and staining. Steps in the analysis include cell location and selection, counting, and examination of chromosomes. In this report we will present preliminary results of evaluations and development of a Computer Automated Specimen Pr...

  15. Automated Vehicles Symposium 2014

    Beiker, Sven; Road Vehicle Automation 2


    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicles Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  16. I-94 Automation FAQs

    Department of Homeland Security — In order to increase efficiency, reduce operating costs and streamline the admissions process, U.S. Customs and Border Protection has automated Form I-94 at air and...

  17. Automated Vehicles Symposium 2015

    Beiker, Sven


    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  18. Hydrometeorological Automated Data System

    National Oceanic and Atmospheric Administration, Department of Commerce — The Office of Hydrologic Development of the National Weather Service operates HADS, the Hydrometeorological Automated Data System. This data set contains the last...

  19. An automated Certification Authority

    Shamardin, L V


    This note describes an approach to building an automated Certification Authority. It is compatible with the basic requirements of RFC 2527. It also supports Registration Authorities and automatic certificate renewal with the Globus Toolkit grid-cert-renew tool.

  20. Disassembly automation: automated systems with cognitive abilities

    Vongbunyong, Supachai


    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  1. Automated security management

    Al-Shaer, Ehab; Xie, Geoffrey


    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  2. Automating Supplier Selection Procedures

    Davidrajuh, Reggie


    This dissertation describes a methodology, tools, and implementation techniques of automating supplier selection procedures of a small and medium-sized agile virtual enterprise. Firstly, a modeling approach is devised that can be used to model the supplier selection procedures of an enterprise. This modeling approach divides the supplier selection procedures broadly into three stages, the pre-selection, selection, and post-selection stages. Secondly, a methodology is presented for automating ...

  3. Taiwan Automated Telescope Network

    Shuhrat Ehgamberdiev; Alexander Serebryanskiy; Antonio Jimenez; Li-Han Wang; Ming-Tsung Sun; Javier Fernandez Fernandez; Dean-Yi Chou


    A global network of small automated telescopes, the Taiwan Automated Telescope (TAT) network, dedicated to photometric measurements of stellar pulsations, is under construction. Two telescopes have been installed in Teide Observatory, Tenerife, Spain and Maidanak Observatory, Uzbekistan. The third telescope will be installed at Mauna Loa Observatory, Hawaii, USA. Each system uses a 9-cm Maksutov-type telescope. The effective focal length is 225 cm, corresponding to an f-ratio of 25. The field...

  4. Automated Lattice Perturbation Theory

    Monahan, Christopher


    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  5. Automated functional software testing

    Jelnikar, Kristina


    The following work describes an approach to the automation of functional software testing. In the introductory part we present the testing problems that development companies are facing. The second chapter describes some testing methods, the role testing plays in software development, some approaches to software development, and the meaning of the testing environment. Chapter 3 is all about test automation. After a brief historical presentation, we demonstrate through s...

  6. Instant Sikuli test automation

    Lau, Ben


    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A concise guide written in an easy-to-follow style using the Starter guide approach. This book is aimed at automation and testing professionals who want to use Sikuli to automate GUIs. Some Python programming experience is assumed.

  7. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in a single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes

  8. Automated macromolecular crystal detection system and method

    Christian, Allen T.; Segelke, Brent; Rupp, Bernard; Toppani, Dominique


    An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected in the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are then geometrically evaluated with respect to each other to identify crystal-like qualities such as parallel lines facing each other, similarity in length, and relative proximity. From this evaluation, a determination is made as to whether crystals are present in each image.
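
    One of the geometric tests named above, near-parallel segments of similar length in close proximity, is easy to make concrete; a sketch with hypothetical tolerances, not the patented implementation:

        import math

        def angle(seg):
            (x1, y1), (x2, y2) = seg
            return math.atan2(y2 - y1, x2 - x1) % math.pi   # undirected angle

        def length(seg):
            (x1, y1), (x2, y2) = seg
            return math.hypot(x2 - x1, y2 - y1)

        def midpoint(seg):
            (x1, y1), (x2, y2) = seg
            return ((x1 + x2) / 2, (y1 + y2) / 2)

        def crystal_like(a, b, ang_tol=0.1, len_ratio=0.8, max_gap=20.0):
            # Two detected edge segments look "crystal-like" if they are
            # nearly parallel, of similar length, and close to each other.
            da = abs(angle(a) - angle(b))
            da = min(da, math.pi - da)                      # wrap-around at pi
            la, lb = length(a), length(b)
            (ax, ay), (bx, by) = midpoint(a), midpoint(b)
            gap = math.hypot(ax - bx, ay - by)
            return (da < ang_tol and min(la, lb) / max(la, lb) > len_ratio
                    and gap < max_gap)

        print(crystal_like(((0, 0), (10, 0)), ((0, 5), (9, 5.5))))  # True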

  9. Parallel Magnetic Resonance Imaging

    Uecker, Martin


    The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: Image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.

  10. SPINning parallel systems software

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin

  11. The NAS Parallel Benchmarks

    Bailey, David H.


    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was on computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  12. Parallel programming with Python

    Palach, Jan


    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  13. Highly parallel computation

    Denning, Peter J.; Tichy, Walter F.


    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. The present evaluation of the development status of such architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be needed. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  14. Parallel Virtual Machine

    Zafer DEMİR


    In this study, the Parallel Virtual Machine (PVM) is first reviewed. Since it is based on parallel processing, its architecture is similar in principle to that of parallel systems. PVM is neither an operating system nor a programming language; it is a software tool that supports heterogeneous parallel systems, and it takes advantage of the features of both to bring users closer to parallel systems. Since PVM allows tasks to be executed in parallel on parallel systems, there is an important similarity between PVM and distributed systems and multiprocessors. In this study, these relations are examined using the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.
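
    The concluding factorial test can be mimicked on a single machine with Python's multiprocessing standing in for PVM's master-slave setup; the chunking below is our reconstruction, not the paper's code:

        from multiprocessing import Pool
        from math import prod

        def partial_product(rng):
            # Each "slave" multiplies one contiguous slice of 1..n.
            lo, hi = rng
            return prod(range(lo, hi + 1))

        def parallel_factorial(n, workers=4):
            # Master splits 1..n into chunks, slaves reduce them, master combines.
            step = n // workers
            bounds = [(i * step + 1, (i + 1) * step if i < workers - 1 else n)
                      for i in range(workers)]
            with Pool(workers) as pool:
                return prod(pool.map(partial_product, bounds))

        if __name__ == "__main__":
            print(parallel_factorial(20))   # 2432902008176640000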

  15. A Comparative Study on Serial and Parallel Web Content Mining

    Binayak Panda


    The World Wide Web (WWW) is a repository that serves every individual's needs, from education to entertainment. But from a user's point of view, getting relevant information for one particular context is time consuming and not easy, because of the volume of data, which is unstructured, distributed and dynamic in nature. Extracting relevant information for one particular context can be automated; this is named Web Content Mining. The efficiency of the automation depends on the validity of the expected outcome as well as on the amount of processing time. The acceptability of the outcome depends on the user or the user's policy, while the amount of processing time depends on the methodology of Web Content Mining. In this work a comparative study has been carried out between Serial Web Content Mining and Parallel Web Content Mining. The work also focuses on a framework for implementing parallelism in Web Content Mining.
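
    A minimal sketch of the serial-versus-parallel contrast for such fetch-and-extract workloads, using only the Python standard library; the URLs and the mining step are placeholders:

        import time
        from concurrent.futures import ThreadPoolExecutor

        URLS = [f"https://example.com/page{i}" for i in range(6)]   # placeholders

        def mine(url):
            # Stand-in for fetching and extracting content from one page.
            time.sleep(0.2)             # simulated network + parse latency
            return url, len(url)

        def serial():
            return [mine(u) for u in URLS]

        def parallel(workers=8):
            # I/O-bound mining overlaps well under a thread pool.
            with ThreadPoolExecutor(max_workers=workers) as ex:
                return list(ex.map(mine, URLS))

        t0 = time.perf_counter(); serial();   t1 = time.perf_counter()
        parallel();                           t2 = time.perf_counter()
        print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")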

  16. A MapReduce based Parallel SVM for Email Classification

    Ke Xu


    Support Vector Machine (SVM) is a powerful classification and regression tool. Various approaches, including SVM-based techniques, have been proposed for email classification. Automated email classification according to messages or user-specific folders, and information extraction from chronologically ordered email streams, have become interesting areas in text machine learning research. This paper presents a parallel SVM based on MapReduce (PSMR) algorithm for email classification. We discuss the challenges that arise from differences between email foldering and traditional document classification. We show experimental results from an array of automated classification methods and evaluation methodologies, including Naive Bayes, SVM and the PSMR method, on foldering results for the Enron datasets based on the timeline. By distributing, processing and optimizing the subsets of the training data across multiple participating nodes, the parallel SVM based on MapReduce algorithm reduces the training time significantly.
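
    The map/reduce split at the heart of such an approach can be sketched on one machine (scikit-learn assumed available). The published pipeline forwards support vectors between training stages; the sketch below substitutes a simple majority vote over shard models to stay short:

        import numpy as np
        from sklearn.svm import LinearSVC

        def map_train(partition):
            # "Map": each node trains an SVM on its shard of the training data.
            X, y = partition
            return LinearSVC(dual=False).fit(X, y)

        def reduce_vote(models, X):
            # "Reduce": combine shard models, here by majority vote.
            votes = np.stack([m.predict(X) for m in models])
            return (votes.mean(axis=0) > 0.5).astype(int)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 10))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)         # synthetic labels
        shards = [(X[i::3], y[i::3]) for i in range(3)]  # 3 "mapper" partitions
        models = [map_train(s) for s in shards]
        print(reduce_vote(models, X[:5]), y[:5])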

  17. Parallel programming with PCN

    Foster, I.; Tuecke, S.


    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs. (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  18. Optical parallel selectionist systems

    Caulfield, H. John


    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  19. Parallel plate detectors

    Two parallel plate avalanche counters (PPAC), of 5x3 cm2 (timing only) and 15x5 cm2 (timing and position), are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the possibilities of the two counters

  20. Parallel hierarchical radiosity rendering

    Carter, M.


    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  1. Applied parallel computing

    Deng, Yuefan


    The book provides a practical guide to computational scientists and engineers to help advance their research by exploiting the superpower of supercomputers with many processors and complex networks. This book focuses on the design and analysis of basic parallel algorithms, the key components for composing larger packages for a wide range of applications.

  2. Parallel hierarchical global illumination

    Snell, Q.O.


    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  3. Practical parallel programming

    Bauer, Barr E


    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  4. Parallel universes beguile science


    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  5. NAS Parallel Benchmarks Results

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)


    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks. We also mention NAS's future plans for the NPB.

  6. Optimizing parallel reduction operations

    Denton, S.M.


    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
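
    The key property these optimization classes exploit is associativity: an associative reduction can be evaluated as a balanced tree of depth O(log n) instead of a sequential chain of length n. A language-neutral sketch in Python (operator and data hypothetical):

        from operator import add

        def tree_reduce(op, xs):
            # Associativity lets us combine pairs level by level: O(log n)
            # depth given n processors, versus O(n) for a sequential fold.
            if len(xs) == 1:
                return xs[0]
            pairs = [op(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
            if len(xs) % 2:
                pairs.append(xs[-1])
            return tree_reduce(op, pairs)

        print(tree_reduce(add, list(range(10))))   # 45, same as sum(range(10))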

  7. Economical parallel oligonucleotide and peptide synthesizer - PET OLIGATOR

    Lebl, M.; Pistek, Ch.; Hachmann, J.; Mudra, Petr; Pešek, Václav; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel


    Roč. 13, 1/2 (2007), s. 367-375. ISSN 1573-3149 Grant ostatní: NIH SBIR(US) R43 GM61511-01; NIH SBIR(US) R43 GM58981-01 Institutional research plan: CEZ:AV0Z40550506 Keywords : automated synthesizer * centrifugation * parallel synthesis Subject RIV: CC - Organic Chemistry Impact factor: 0.971, year: 2007

  8. A MapReduce based Parallel SVM for Email Classification

    Ke Xu; Cui Wen; Qiong Yuan; Xiangzhu He; Jun Tie


    Support Vector Machine (SVM) is a powerful classification and regression tool. Varying approaches including SVM based techniques are proposed for email classification. Automated email classification according to messages or user-specific folders and information extraction from chronologically ordered email streams have become interesting areas in text machine learning research. This paper presents a parallel SVM based on MapReduce (PSMR) algorithm for email classification. We discuss the chal...

  9. Trapping Parallel Port to Operate 220V Appliances

    Prateek Sharma; Kapil Kumar; Ajay Kumar Singh


    With the advancement of technology, things are becoming simpler and easier for us. Automation is the use of technology to reduce human work. Automatic systems are being preferred over manual systems. Internet control offers a new approach to controlling electric appliances from a remote terminal, using the Internet, Bluetooth or a Local Area Network connection. This system is accomplished by personal computers, the parallel port, a local area network connection, an internet connection, a mobile phone and Bluetooth dev...

  10. Automated Camera Calibration

    Chen, Siqi; Cheng, Yang; Willson, Reg


    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
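
    With OpenCV available, the model fit that follows the 3D-to-2D measurement step reduces to one call; a hedged sketch with placeholder correspondences (real measurements are needed for a meaningful result):

        import numpy as np
        import cv2

        # Hypothetical correspondences: object_points holds known 3-D fiducial
        # locations per image; image_points holds their measured 2-D projections.
        object_points = [np.zeros((6 * 7, 3), np.float32)]
        object_points[0][:, :2] = np.mgrid[0:7, 0:6].T.reshape(-1, 2)
        image_points = [np.random.rand(42, 1, 2).astype(np.float32)]  # placeholder

        ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            object_points, image_points, (640, 480), None, None)
        print("estimated intrinsics:\n", K)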

  11. Automated telescope scheduling

    Johnston, Mark D.


    With the ever increasing level of automation of astronomical telescopes the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling groundbased telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.

  12. To Parallelize or Not to Parallelize, Speed Up Issue

    Alaa Ismail El-Nashar


    Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing serial applications, some analysis is recommended to decide whether the application will benefit from parallelization or not. In this paper we discuss the issue of the speedup gained from parallelization using the Message Passing Interface (MPI), to compromise between the overhead of parallelization cost and the gained parallel speedup. We also propose an experimental method to predict the speedup of MPI applications.
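
    The compromise described, parallel speedup against parallelization overhead, is often modeled with Amdahl-style accounting: S(p) = 1 / ((1 - f) + f/p + o(p)) for parallelizable fraction f and overhead o. A small sketch with a hypothetical linear overhead term:

        def predicted_speedup(f, p, overhead_per_proc=0.002):
            # f: parallelizable fraction of the serial runtime (0..1)
            # p: number of MPI processes; the last term models communication cost.
            return 1.0 / ((1.0 - f) + f / p + overhead_per_proc * p)

        for p in (1, 2, 4, 8, 16, 32):
            # With f = 0.95, speedup rises and then stalls as overhead grows.
            print(p, round(predicted_speedup(0.95, p), 2))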

  13. Cultural Resources, RecreationAreasESRI-This data set represents the recreational areas found in Utah, including campgrounds, golf courses and ski resorts., Published in 2001, Smaller than 1:100000 scale, State of Utah Automated Geographic Reference Center.

    NSGIC GIS Inventory (aka Ramona) — This Cultural Resources dataset, published at Smaller than 1:100000 scale, was produced all or in part from Published Reports/Deeds information as of 2001. It is...

  14. Myths in test automation

    Jazmine Francis


    Myths in automation of software testing is an issue of discussion that echoes through the validation services area of the software industry. Probably the first thought that appears to a knowledgeable reader would be: why this old topic again? What is new to discuss? But for the first time everyone agrees that undoubtedly automation testing today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as a simple linear script...

  15. Automated phantom assay system

    This paper describes an automated phantom assay system developed for assaying phantoms spiked with minute quantities of radionuclides. The system includes a computer-controlled linear-translation table that positions the phantom at exact distances from a spectrometer. A multichannel analyzer (MCA) interfaces with a computer to collect gamma spectral data. Signals transmitted between the controller and MCA synchronize data collection and phantom positioning. Measured data are then stored on disk for subsequent analysis. The automated system allows continuous unattended operation and ensures reproducible results

  16. Parallel multilevel preconditioners

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.


    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  17. Parallel clustering with CFinder

    Pollner, Peter; Vicsek, Tamas; DOI: 10.1142/S0129626412400014


    The amount of available data about complex systems is increasing every year, and measurements of larger and larger systems are collected and recorded. A natural representation of such data is given by networks, whose size follows the size of the original system. The current trend of multiple cores in computing infrastructures calls for a parallel reimplementation of earlier methods. Here we present the grid version of CFinder, which can locate overlapping communities in directed, weighted or undirected networks based on the clique percolation method (CPM). We show that the computation of the communities can be distributed among several CPUs or computers. Although switching to the parallel version does not necessarily lead to a gain in computing time, it definitely makes the community structure of extremely large networks accessible.
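
    Readers who want to try the clique percolation method itself can use the implementation shipped with networkx (assuming networkx is installed; this is unrelated to the authors' grid version):

        import networkx as nx
        from networkx.algorithms.community import k_clique_communities

        G = nx.Graph()
        G.add_edges_from([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (3, 5), (4, 6)])
        # CPM with k=3: communities are unions of adjacent 3-cliques
        # (adjacent means sharing k-1 = 2 nodes).
        for community in k_clique_communities(G, 3):
            print(sorted(community))       # [1, 2, 3] and [3, 4, 5]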

  18. Stewart Platforms

    Florian Ion Tiberius Petrescu


    Parallel structures for moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest systems: fast, solid and precise. The work outlines a few main elements of Stewart platforms. It begins with the geometry of the platform and its kinematic elements, and then presents a few items of its dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile part is then recorded using a rotation matrix method. If a structural motoelement consists of two moving elements that translate relative to one another, it is more convenient, for the drive train and especially for the dynamics, to represent the motoelement as a single moving component. We thus have seven moving parts (the six motoelements, or feet, plus the mobile platform) and one fixed part.

  19. Parallel grid population

    Wald, Ingo; Ize, Santiago


    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
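
    The two-phase scheme, first binning objects by the grid portions that bound them and then populating portions independently, can be sketched sequentially; the loops below are what the described method distributes across processors (all names ours):

        def split_grid(n_cells, n_procs):
            # Phase 0: n contiguous portions of the 1-D grid [0, n_cells).
            step = n_cells // n_procs
            return [range(i * step, (i + 1) * step if i < n_procs - 1 else n_cells)
                    for i in range(n_procs)]

        def bin_objects(objects, portions):
            # Phase 1: each processor decides which portions bound its objects.
            bins = {i: [] for i in range(len(portions))}
            for lo, hi in objects:                    # object spans cells lo..hi
                for i, portion in enumerate(portions):
                    if lo <= portion[-1] and hi >= portion[0]:   # overlap test
                        bins[i].append((lo, hi))
            return bins

        # Phase 2 would hand bins[i] to processor i to populate its portion.
        portions = split_grid(100, 4)
        print(bin_objects([(3, 5), (24, 26), (49, 51)], portions))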

  20. Ultrascalable petaflop parallel supercomputer

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd


    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  1. Computer-assisted teleoperation (TAO) with a parallel syntactor

    Diaz, C.


    In remote control, the master element which the user operates looks, for practical and historical reasons, like the slave arm, and therefore features a series architecture, with a few drawbacks in terms of mass, dimensions, rigidity and mechanical complexity. To remedy these defects, we are now introducing a new master element with parallel kinematics. This syntactor, derived from Stewart's manipulators, has six degrees of freedom and comprises six motor-driven links arranged on a fixed plate (t...

  2. Stability of parallel flows

    Betchov, R


    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  3. Xyce parallel electronic simulator.

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.


    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  4. Parallel computing techniques

    Nakano, Junji


    Parallel computing means to divide a job into several tasks and use more than one processor simultaneously to perform these tasks. Assume you have developed a new estimation method for the parameters of a complicated statistical model. After you prove the asymptotic characteristics of the method (for instance, asymptotic distribution of the estimator), you wish to perform many simulations to assure the goodness of the method for reasonable numbers of data values and for different values of pa...

  5. Parallel Computation Is ESS

    Mondal, Nabarun; Ghosh, Partha P.


    There are an enormous number of examples of computation in nature, exemplified across multiple species in biology. One crucial aim of these computations across all life forms is their ability to learn, and thereby increase the chance of their survival. In the current paper a formal definition of autonomous learning is proposed. From that definition we establish a Turing Machine model for learning, where rule tables can be added or deleted, but cannot be modified. Sequential and parallel implementa...

  6. Algorithmically specialized parallel computers

    Snyder, Lawrence; Gannon, Dennis B


    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  7. Controlled Fuzzy Parallel Rewriting

    Asveld, Peter R.J.


    We study a Lindenmayer-like parallel rewriting system to model the growth of filaments (arrays of cells) in which developmental errors may occur. In essence this model is the fuzzy analogue of the derivation-controlled iteration grammar. Under minor assumptions on the family of control languages and on the family of fuzzy languages in the underlying iteration grammar, we show that (i) regular control does not provide additional generating power to the model, (ii) the number of fuzzy substitut...

  8. Controlled Fuzzy Parallel Rewriting

    Asveld, Peter R.J.; Paun, G.; Salomaa, A


    We study a Lindenmayer-like parallel rewriting system to model the growth of filaments (arrays of cells) in which developmental errors may occur. In essence this model is the fuzzy analogue of the derivation-controlled iteration grammar. Under minor assumptions on the family of control languages and on the family of fuzzy languages in the underlying iteration grammar, we show (i) regular control does not provide additional generating power to the model, (ii) the number of fuzzy substitutions ...

  9. Parallel programming with MPI

    MPI is a practical, portable, efficient and flexible standard for message passing, which has been implemented on most MPPs and networks of workstations by machine vendors, universities and national laboratories. MPI avoids specifying how operations will take place and avoids superfluous work, achieving efficiency as well as portability, and it is designed to encourage the overlapping of communication and computation to hide communication latencies. This presentation briefly explains the MPI standard, and comments on efficient parallel programming to improve performance. (author)
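
    The overlap of communication and computation that the standard encourages looks like this with mpi4py's nonblocking calls (assuming mpi4py is installed; run under mpiexec with two ranks):

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        if rank == 0:
            req = comm.isend({"chunk": 1}, dest=1, tag=7)  # start send, don't wait
            local = sum(i * i for i in range(10**6))       # compute meanwhile
            req.wait()                                     # complete the send
            print("rank 0 computed", local)
        elif rank == 1:
            req = comm.irecv(source=0, tag=7)
            local = sum(i for i in range(10**6))           # overlap on this side too
            data = req.wait()                              # returns the sent object
            print("rank 1 got", data, "and computed", local)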

  10. Automated conflict resolution issues

    Wike, Jeffrey S.


    A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.

  11. Protocols for Home Automation

    Kjær, Kristian Ellebæk


    computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the status of the home can be viewed from a computer, or perhaps even from a mobile phone. While the applications mentioned are classic home automation uses, additional functionality has emerged...

  12. Myths in test automation

    Jazmine Francis


    Myths in automation of software testing is an issue of discussion that echoes through the validation services area of the software industry. Probably the first thought that appears to a knowledgeable reader would be: why this old topic again? What is new to discuss? But for the first time everyone agrees that undoubtedly automation testing today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications today has a complex architecture and hybrid frameworks to facilitate the implementation of testing applications developed with various platforms and technologies. Undoubtedly automation has advanced, but so have the myths associated with it. The change in people's perspective and knowledge of automation has altered the terrain. This article reflects the author's point of view and experience concerning the transformation of the original myths into new versions, and how they are derived; it also provides his thoughts on the new generation of myths.

  13. Automated data model evaluation

    The modeling process is an essential phase within information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding automation of the model correctness analysis process, and its relations with ontology tools, are presented. Key words: Database modeling, Data model correctness, Evaluation

  14. Automated solvent concentrator

    Griffith, J. S.; Stuart, J. L.


    Designed for the automated drug identification system (AUDRI), the device increases concentration by a factor of 100. The sample is first filtered, removing particulate contaminants and reducing the water content of the sample. The sample is then extracted from the filtered residue by a specific solvent. The concentrator provides input material to the analysis subsystem.

  15. Laboratory Test Benches on Electropneumatic Automation

    Dolgorukov, S. O.; National Aviation University; Roman, B. V.; National Aviation University


    The article reflects the current situation in education regarding the difficulties of learning mechatronics. A complex of laboratory test benches on electropneumatic automation is considered as a tool for advancing through technical science. The course of laboratory works was developed to meet the requirement of an efficient and reliable way of acquiring practical skills, and is regarded as the simplest way for students to learn the basics of mechatronics.

  16. Automating spectral measurements

    Goldstein, Fred T.


    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
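
    On Windows, the same COM/ActiveX interoperability is scriptable from Python via pywin32; a hedged sketch of a DAQ-style client driving Excel (the ProgID is Excel's standard one, the data values are hypothetical):

        import win32com.client   # pywin32; COM is Windows-only

        # Attach to (or launch) Excel through its COM automation interface.
        excel = win32com.client.Dispatch("Excel.Application")
        excel.Visible = True

        book = excel.Workbooks.Add()
        sheet = book.Worksheets(1)
        # A DAQ server would push measured (wavelength, transmittance) pairs
        # into the sheet like this; the readings below are hypothetical.
        sheet.Cells(1, 1).Value = "wavelength_nm"
        sheet.Cells(1, 2).Value = "transmittance"
        sheet.Cells(2, 1).Value = 550
        sheet.Cells(2, 2).Value = 0.87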

  17. El general de brigada es um tipo de caramelo – machine translation and cultural learning.
    DOI: 10.5007/2175-7968.2011v1n27p243

    Nylcea Thereza de Siqueira Pedra; Ruth Bohunovsky


    This article aims to examine the role assumed by (machine) translation in the teaching and learning of foreign languages. To this end, different didactic functions attributed to translation are first discussed. Among them, the one aimed at “cultural learning” stands out, seeking to sensitize learners to cultural aspects related to the language. Drawing on several theorists, the use of concepts such as “culture”, “language” and “translation” is problematized, and ...

  18. Parallel Programming with Declarative Ada

    Thornley, John


    Declarative programming languages (e.g., functional and logic programming languages) are semantically elegant and implicitly express parallelism at a high level. We show how a parallel declarative language can be based on a modern structured imperative language with single-assignment variables. Such a language combines the advantages of parallel declarative programming with the strengths and familiarity of the underlying imperative language. We introduce Declarative Ada, a parallel declarativ...

  19. Integrating Task and Data Parallelism

    Massingill, Berna


    Many models of concurrency and concurrent programming have been proposed; most can be categorized as either task-parallel (based on functional decomposition) or data-parallel (based on data decomposition). Task-parallel models are most effective for expressing irregular computations; data-parallel models are most effective for expressing regular computations. Some computations, however, exhibit both regular and irregular aspects. For such computations, a better programming model is one that i...

  20. Parallel processing approaches in robotics

    Henrich, Dominik; Höniger, Thomas


    This paper presents the different possibilities for parallel processing in robot control architectures. We begin with a short review of the historic development of control architectures. Then, a list of requirements for control architectures is set up from a parallel-processing point of view. As our main topic, we identify the levels of parallel processing in robot control architectures. For each level of parallelism, examples of typical robot control architectures are presented. Final...

  1. Parallel Repetition From Fortification

    Moshkovitz Aaronson, Dana Hadar


    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two-prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games – "fortification" – and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from...
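
    In symbols (a schematic paraphrase only; val denotes the value of a game, k the number of repetitions, and ε an arbitrarily small additive error term, with the exact statement and quantifiers in the paper):

        \[
          \mathrm{val}\bigl(G^{\otimes k}\bigr) \;\le\; \bigl(\mathrm{val}(G) + \varepsilon\bigr)^{k}
          \quad \text{for a fortified game } G,
        \]

    i.e., the value decays perfectly exponentially in k, up to the small slack ε.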

  2. Matlab in Parallel

    Jakl, Ondřej; Musil, Tomáš

    Ostrava: VŠB-TU Ostrava, 2007 - (Doležalová, J.), s. 100-104 ISBN 978-80-248-1649-4. [Moderní matematické metody v inženýrství. Dolní Lomná (CZ), 04.06.2007-06.06.2007] R&D Projects: GA AV ČR IBS3086102 Institutional research plan: CEZ:AV0Z30860518; CEZ:AV0Z20760514 Keywords : Matlab * parallel processing * nonlinear dynamics of rotors Subject RIV: IN - Informatics, Computer Science

  3. Object-Oriented Parallel Programming

    Givelberg, Edward


    We introduce an object-oriented framework for parallel programming, which is based on the observation that programming objects can be naturally interpreted as processes. A parallel program consists of a collection of persistent processes that communicate by executing remote methods. We discuss code parallelization and process persistence, and explain the main ideas in the context of computations with very large data objects.
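
    A rough analogue of "objects as processes" in Python (illustrative only; the paper defines its own framework): an object lives in a server process and clients invoke its methods remotely through a proxy.

        # Sketch: a Counter object hosted in a separate server process;
        # method calls from the client are executed remotely via a proxy.
        # BaseManager is the standard-library mechanism used here; the
        # Counter class and its registered name are illustrative.
        from multiprocessing.managers import BaseManager

        class Counter:
            def __init__(self) -> None:
                self.n = 0
            def increment(self) -> int:   # runs inside the server process
                self.n += 1
                return self.n

        class ObjectServer(BaseManager):
            pass

        ObjectServer.register("Counter", Counter)

        if __name__ == "__main__":
            with ObjectServer() as mgr:   # starts the server process
                c = mgr.Counter()         # proxy to the remote instance
                print(c.increment())      # remote method call -> 1
                print(c.increment())      # -> 2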

  4. 21 CFR 866.2170 - Automated colony counter.


    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated colony counter. 866.2170 Section 866.2170 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... purposes to determine the number of bacterial colonies present on a bacteriological culture...

  5. 21 CFR 866.2850 - Automated zone reader.


    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated zone reader. 866.2850 Section 866.2850 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... surface of certain culture media used in disc-agar diffusion antimicrobial susceptibility tests....

  6. Effective Manufacturing Method for Automated Inside Diameter Grinding

    Slowinski, Bronislaw; Nadolny, Krzysztof

    This paper presents the essence and results of experimental investigations of a highly efficient automated internal cylindrical grinding method. The essence of the method is removal of the whole grinding allowance in one pass of the grinding wheel while preserving the required quality of the surface layer of the workpiece. The grinding wheel used in the developed method has a zonally diversified internal structure and a properly prepared conical chamfer.

  7. Tolerant (parallel) Programming

    DiNucci, David C.; Bailey, David H. (Technical Monitor)


    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  8. Theory of Parallel Mechanisms

    Huang, Zhen; Ding, Huafeng


    This book covers mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. The methodology is based on the author's screw theory, proposed in 1997, whose generality and validity were proved only recently; mobility itself is a very complex issue, researched by various scientists over the last 150 years. The principle of kinematic influence coefficients and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. Singularities are classified from a new point of view, and progress on position-singularity and orientation-singularity is reported. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, together with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...
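
    For background, the classical Grübler–Kutzbach mobility criterion, which screw-theory methods refine when redundant constraints make the naive count fail (the notation here is the common textbook one, not necessarily the book's):

        \[
          M \;=\; 6\,(n - g - 1) \;+\; \sum_{i=1}^{g} f_i ,
        \]

    where n is the number of links (including the frame), g the number of joints, and f_i the number of degrees of freedom of joint i.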

  9. Massively Parallel QCD

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G


    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.
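
    As a quick consistency check on the quoted figures (not part of the abstract): each BlueGene/L CPU is a 700 MHz PowerPC 440 whose double FPU can issue 4 floating-point operations per cycle, for a peak of 2.8 GFlop/s per CPU, so

        \[
          131{,}072 \times 2.8\ \text{GFlop/s} \approx 367\ \text{TFlop/s (peak)},
          \qquad
          \frac{70.5\ \text{TFlop/s}}{367\ \text{TFlop/s}} \approx 19\%,
        \]

    consistent with the stated sustained performance of about 20% of peak.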

  10. A Parallel Butterfly Algorithm

    Poulson, Jack


    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform f(x) = ∫ K(x, y) g(y) dy, i.e. sums of the form f(x) = Σ_y K(x, y) g(y), at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.