WorldWideScience

Sample records for automated parallel cultures

  1. Fully automated parallel oligonucleotide synthesizer

    Czech Academy of Sciences Publication Activity Database

    Lebl, M.; Burger, Ch.; Ellman, B.; Heiner, D.; Ibrahim, G.; Jones, A.; Nibbe, M.; Thompson, J.; Mudra, Petr; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel

    2001-01-01

    Vol. 66, No. 8 (2001), pp. 1299-1314. ISSN 0010-0765. Institutional research plan: CEZ:AV0Z4055905. Keywords: automated oligonucleotide synthesizer. Subject RIV: CC - Organic Chemistry. Impact factor: 0.778, year: 2001

  2. Parallel symbolic execution for automated real-world software testing

    OpenAIRE

    Bucur, Stefan; Ureche, Vlad; Zamfir, Cristian; Candea, George

    2011-01-01

    This paper introduces Cloud9, a platform for automated testing of real-world software. Our main contribution is the scalable parallelization of symbolic execution on clusters of commodity hardware, to help cope with path explosion. Cloud9 provides a systematic interface for writing "symbolic tests" that concisely specify entire families of inputs and behaviors to be tested, thus improving testing productivity. Cloud9 can handle not only single-threaded programs but also multi-threaded and dis...

  3. Automated Enhanced Parallelization of Sequential C to Parallel OpenMP

    Directory of Open Access Journals (Sweden)

    Dheeraj D., Shruti Ramesh, Nitish B.

    2012-08-01

    The paper presents work toward implementing a technique that enhances the parallel execution of auto-generated OpenMP programs by taking the architecture of on-chip cache memory into account, thereby achieving higher performance. It avoids false sharing in for-loops by generating OpenMP code that dynamically schedules chunks placed one data-cache line apart per core. Most parallelization tools have been found not to address significant multicore issues such as false sharing, which can degrade performance. An open-source parallelization tool called Par4All (Parallel for All), which internally uses the PIPS (Parallelization Infrastructure for Parallel Systems) - PoCC (Polyhedral Compiler Collection) integration, has been analyzed and exploited to maximize hardware utilization. The work focuses only on optimizing the parallelization of for-loops, since loops are the most time-consuming parts of code. The performance of the generated OpenMP programs has been analyzed on different architectures using the Intel® VTune™ Performance Analyzer. Some computationally intensive programs from PolyBench were tested with different data sets, and the results reveal that the OpenMP codes generated by the enhanced technique achieve considerable speedup. The deliverables include the automation tool, test cases, corresponding OpenMP programs, and performance analysis reports.
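
    The abstract does not reproduce the generated code, so the following is a minimal hand-written C/OpenMP sketch of the false-sharing fix it describes: chunks are sized to a whole data-cache line (the 64-byte line size is an assumption) so that no two cores write into the same line of the output array.

        /* Sketch of cache-line-sized chunk scheduling to avoid false sharing.
         * Assumes 64-byte lines and a line-aligned array; not Par4All output. */
        #include <stdio.h>

        #define N 1000000
        #define CACHE_LINE 64   /* assumed L1 data-cache line size in bytes */

        int main(void) {
            static float a[N], b[N];
            int chunk = CACHE_LINE / sizeof(float);   /* 16 floats per line */

            /* Each dynamic chunk spans whole cache lines, so no two threads
             * write to the same line of a[]. */
            #pragma omp parallel for schedule(dynamic, chunk)
            for (int i = 0; i < N; i++)
                a[i] = 2.0f * b[i] + 1.0f;

            printf("a[0] = %f\n", a[0]);
            return 0;
        }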

  4. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris

    2000-01-01

    Parallelized versions of genetic algorithms (GAs) are popular primarily for three reasons: the GA is an inherently parallel algorithm, typical GA applications are very compute intensive, and powerful computing platforms, especially Beowulf-style computing clusters, are becoming more affordable and easier to implement. In addition, the low communication bandwidth required allows the use of inexpensive networking hardware such as standard office Ethernet. In this paper we describe a parallel GA and its use in automated high-level circuit design. Genetic algorithms are a trial-and-error search technique guided by principles of Darwinian evolution. Just as the genetic material of two living organisms can intermix to produce offspring that are better adapted to their environment, GAs expose genetic material, frequently strings of 1s and 0s, to the forces of artificial evolution: selection, mutation, recombination, etc. GAs start with a pool of randomly generated candidate solutions which are then tested and scored with respect to their utility. Solutions are then bred by probabilistically selecting high-quality parents and recombining their genetic representations to produce offspring solutions. Offspring are typically subjected to a small amount of random mutation. After a pool of offspring is produced, this process iterates until a satisfactory solution is found or an iteration limit is reached. Genetic algorithms have been applied to a wide variety of problems in many fields, including chemistry, biology, and many engineering disciplines. There are many styles of parallelism used in implementing parallel GAs. One such method is called the master-slave or processor-farm approach. In this technique, slave nodes are used solely to compute fitness evaluations (the most time-consuming part). The master processor collects fitness scores from the nodes and performs the genetic operators (selection, reproduction, variation, etc.). Because of dependency
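
    As a concrete illustration of the master-slave pattern described above, here is a minimal C sketch in which OpenMP threads stand in for the Beowulf slave nodes (an illustrative substitution; the paper uses networked cluster nodes) and a toy bit-counting objective stands in for circuit evaluation.

        /* Master-slave GA sketch: "slaves" evaluate fitness in parallel,
         * the master performs selection, crossover, and mutation. */
        #include <stdlib.h>
        #include <string.h>

        #define POP   64
        #define GENES 32

        typedef struct { unsigned char g[GENES]; double fitness; } Individual;

        /* toy objective (count of 1s), standing in for circuit evaluation */
        static double evaluate(const Individual *ind) {
            int ones = 0;
            for (int i = 0; i < GENES; i++) ones += ind->g[i];
            return (double)ones;
        }

        static const Individual *tournament(const Individual pop[POP]) {
            const Individual *a = &pop[rand() % POP], *b = &pop[rand() % POP];
            return (b->fitness > a->fitness) ? b : a;   /* tournament of 2 */
        }

        void ga_generation(Individual pop[POP]) {
            /* slave phase: the costly fitness evaluations, run in parallel */
            #pragma omp parallel for
            for (int i = 0; i < POP; i++)
                pop[i].fitness = evaluate(&pop[i]);

            /* master phase: selection, one-point crossover, rare mutation */
            Individual next[POP];
            for (int i = 0; i < POP; i++) {
                const Individual *pa = tournament(pop), *pb = tournament(pop);
                int cut = rand() % GENES;
                memcpy(next[i].g, pa->g, cut);
                memcpy(next[i].g + cut, pb->g + cut, GENES - cut);
                if (rand() % 100 < 2)                   /* 2% mutation rate */
                    next[i].g[rand() % GENES] ^= 1;
            }
            memcpy(pop, next, sizeof next);
        }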

  5. Automating the selection of standard parallels for conic map projections

    Science.gov (United States)

    Šavrič, Bojan; Jenny, Bernhard

    2016-05-01

    Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
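
    The fitted polynomial coefficients themselves are not given in the abstract, so the sketch below implements the classic rule of thumb the article improves upon (standard parallels placed one sixth of the latitude range inside the northern and southern limits), purely to make the inputs and outputs of such a selection function concrete.

        /* Rule-of-thumb standard-parallel selection for a conic projection.
         * The article's polynomial model replaces this heuristic. */
        #include <stdio.h>

        void standard_parallels(double lat_south, double lat_north,
                                double *phi1, double *phi2) {
            double range = lat_north - lat_south;
            *phi1 = lat_south + range / 6.0;   /* lower standard parallel */
            *phi2 = lat_north - range / 6.0;   /* upper standard parallel */
        }

        int main(void) {
            double phi1, phi2;
            standard_parallels(24.0, 50.0, &phi1, &phi2); /* roughly the US */
            printf("phi1 = %.2f deg, phi2 = %.2f deg\n", phi1, phi2);
            return 0;
        }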

  6. Automating the parallel processing of fluid and structural dynamics calculations

    Science.gov (United States)

    Arpasi, Dale J.; Cole, Gary L.

    1987-01-01

    The NASA Lewis Research Center is actively involved in the development of expert system technology to assist users in applying parallel processing to computational fluid and structural dynamic analysis. The goal of this effort is to eliminate the necessity for the physical scientist to become a computer scientist in order to effectively use the computer as a research tool. Programming and operating software utilities have previously been developed to solve systems of ordinary nonlinear differential equations on parallel scalar processors. Current efforts are aimed at extending these capabilities to systems of partial differential equations that describe the complex behavior of fluids and structures within aerospace propulsion systems. This paper presents some important considerations in the redesign, in particular the need for algorithms and software utilities that can automatically identify data flow patterns in the application program and partition and allocate calculations to the parallel processors. A library-oriented multiprocessing concept for integrating the hardware and software functions is described.

  7. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    Science.gov (United States)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  8. Automation in laser scanning for cultural heritage applications

    OpenAIRE

    Böhm, Jan; Haala, Norbert; Alshawabkeh, Yahya

    2005-01-01

    Within the paper we present the current activities of the Institute for Photogrammetry in cultural heritage documentation in Jordan. In particular two sites, Petra and Jerash, were recorded using terrestrial laser scanning (TLS). We present the results and the current status of the recording. Experiences drawn from these projects have led us to investigate more automated approaches to TLS data processing. We detail two approaches within this work. The automation of georeferencing for TLS data...

  9. Saturated Feedback Control for an Automated Parallel Parking Assist System

    OpenAIRE

    Petrov, Plamen; Nashashibi, Fawzi

    2014-01-01

    This paper considers the parallel parking problem for automatic front-wheel-steering vehicles. The problem of stabilizing the vehicle at the desired position and orientation is treated as an extension of the tracking problem. A saturated control law is proposed which achieves quick steering of the system near the desired position of the parking spot with the desired orientation and can be used successfully in solving parking problems. In addition, in order to obtain a larger area of the starting positions of t...

  10. Automated harvesting and 2-step purification of unclarified mammalian cell-culture broths containing antibodies.

    Science.gov (United States)

    Holenstein, Fabian; Eriksson, Christer; Erlandsson, Ioana; Norrman, Nils; Simon, Jill; Danielsson, Åke; Milicov, Adriana; Schindler, Patrick; Schlaeppi, Jean-Marc

    2015-10-30

    Therapeutic monoclonal antibodies represent one of the fastest growing segments in the pharmaceutical market. The growth of the segment has necessitated development of new efficient and cost-saving platforms for the preparation and analysis of early candidates for faster and better antibody selection and characterization. We report on a new integrated platform for automated harvesting of whole unclarified cell-culture broths, followed by in-line tandem affinity capture, pH neutralization, and size-exclusion chromatography of recombinant antibodies expressed transiently in mammalian human embryonic kidney 293T cells at the 1-L scale. The system consists of two bench-top chromatography instruments connected to a central unit with eight disposable filtration devices used for loading and filtering the cell cultures. The staggered parallel multi-step configuration of the system allows unattended processing of eight samples in less than 24 h. The system was validated with a random panel of 45 whole-cell culture broths containing recombinant antibodies in the early profiling phase. The results showed that the overall performance of the automated preparative system was higher than that of the conventional downstream process of manual harvesting and purification. The mean recovery of purified material from the culture broth was 66.7%, representing a 20% increase compared to that of the manual process. Moreover, the automated process reduced the amount of residual aggregates in the purified antibody fractions 3-fold, indicating that the automated system allows the cost-efficient and timely preparation of antibodies in the 20-200 mg range and covers the requirements for early in vitro and in vivo profiling and formulation of these drug candidates. PMID:26431859

  11. The Protein Maker: an automated system for high-throughput parallel purification

    International Nuclear Information System (INIS)

    The Protein Maker instrument addresses a critical bottleneck in structural genomics by allowing automated purification and buffer testing of multiple protein targets in parallel with a single instrument. Here, the use of this instrument to (i) purify multiple influenza-virus proteins in parallel for crystallization trials and (ii) identify optimal lysis-buffer conditions prior to large-scale protein purification is described. The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications

  12. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    OpenAIRE

    Hon Ming Yip; John C. S. Li; Kai Xie; Xin Cui; Agrim Prasad; Qiannan Gao; Chi Chiu Leung; Lam, Raymond H. W.

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet...

  13. Anthropology and cultural neuroscience: creating productive intersections in parallel fields.

    Science.gov (United States)

    Brown, R A; Seligman, R

    2009-01-01

    Partly due to the failure of anthropology to productively engage the fields of psychology and neuroscience, investigations in cultural neuroscience have occurred largely without the active involvement of anthropologists or anthropological theory. Dramatic advances in the tools and findings of social neuroscience have emerged in parallel with significant advances in anthropology that connect social and political-economic processes with fine-grained descriptions of individual experience and behavior. We describe four domains of inquiry that follow from these recent developments, and provide suggestions for intersections between anthropological tools - such as social theory, ethnography, and quantitative modeling of cultural models - and cultural neuroscience. These domains are: the sociocultural construction of emotion, status and dominance, the embodiment of social information, and the dual social and biological nature of ritual. Anthropology can help locate unique or interesting populations and phenomena for cultural neuroscience research. Anthropological tools can also help "drill down" to investigate key socialization processes accountable for cross-group differences. Furthermore, anthropological research points at meaningful underlying complexity in assumed relationships between social forces and biological outcomes. Finally, ethnographic knowledge of cultural content can aid with the development of ecologically relevant stimuli for use in experimental protocols. PMID:19874960

  14. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less
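
    To make the Normalize-Transpose idea concrete: applying a scalar operation to sequences in SequenceL implicitly maps it over their elements, and the translator emits multithreaded code. The hand-written C/OpenMP loop below approximates what such generated code does for an element-wise product; it is not the translator's actual output.

        /* SequenceL-style c := a * b on sequences: NT "normalizes" the
         * scalar '*' over the elements; parallelism is implicit. */
        #include <stddef.h>

        void nt_multiply(const double *a, const double *b, double *c, size_t n) {
            #pragma omp parallel for
            for (ptrdiff_t i = 0; i < (ptrdiff_t)n; i++)
                c[i] = a[i] * b[i];  /* no parallel annotation in the source */
        }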

  15. Digital microfluidics for automated hanging drop cell spheroid culture.

    Science.gov (United States)

    Aijian, Andrew P; Garrell, Robin L

    2015-06-01

    Cell spheroids are multicellular aggregates, grown in vitro, that mimic the three-dimensional morphology of physiological tissues. Although there are numerous benefits to using spheroids in cell-based assays, the adoption of spheroids in routine biomedical research has been limited, in part, by the tedious workflow associated with spheroid formation and analysis. Here we describe a digital microfluidic platform that has been developed to automate liquid-handling protocols for the formation, maintenance, and analysis of multicellular spheroids in hanging-drop culture. We show that droplets of liquid can be added to and extracted from through-holes, or "wells," fabricated in the bottom plate of a digital microfluidic device, enabling the formation and assaying of hanging drops. Using this digital microfluidic platform, spheroids of mouse mesenchymal stem cells were formed and maintained in situ for 72 h, exhibiting good viability (>90%) and size uniformity (low percent coefficient of variation). These results indicate that the digital microfluidic platform provides a viable tool for automating cell spheroid culture and analysis. PMID:25510471

  16. DATA TRANSFER IN THE AUTOMATED SYSTEM OF PARALLEL DESIGN AND CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Volkov Andrey Anatol'evich

    2012-12-01

    This article covers data-transfer processes in the automated system of parallel design and construction. The authors consider the structure of reports used by contractors and clients when large-scale projects are implemented. All necessary items of information are grouped into three levels, and each level is described by certain attributes. The authors devote particular attention to the integrated operational schedule, as it is the main tool of project management. Some recommendations concerning the forms and content of reports are presented. Integrated automation of all operations is a necessary condition for the successful implementation of the new concept. The technical aspect of the notion of parallel design and construction also includes the client-to-server infrastructure that brings together all processes implemented by the parties involved in projects. This approach should be taken into consideration in the course of review of existing codes and standards to eliminate any inconsistency between construction legislation and the practical experience of engineers involved in the process.

  17. Fully Automated Design of Super-High-Rise Building Structures by a Hybrid AI Model on a Massively Parallel Machine

    OpenAIRE

    Adeli, Hojjat; Park, H. S.

    1996-01-01

    This article presents an innovative research project (sponsored by the National Science Foundation, the American Iron and Steel Institute, and the American Institute of Steel Construction) where computationally elegant algorithms based on the integration of a novel connectionist computing model, mathematical optimization, and a massively parallel computer architecture are used to automate the complex process of engineering design.

  18. Establishment of automated culture system for murine induced pluripotent stem cells

    Directory of Open Access Journals (Sweden)

    Koike Hiroyuki

    2012-11-01

    Background: Induced pluripotent stem (iPS) cells can differentiate into any cell type, which makes them an attractive resource in fields such as regenerative medicine, drug screening, or in vitro toxicology. The most important prerequisite for these industrial applications is a stable supply and uniform quality of iPS cells. Variation in quality largely results from differences in handling skills between operators in laboratories. To minimize these differences, establishment of an automated iPS cell culture system is necessary. Results: We developed a standardized mouse iPS cell maintenance culture using an automated cell culture system housed in a CO2 incubator commonly used in many laboratories. The iPS cells propagated in a chamber uniquely designed for automated culture and showed the specific colony morphology seen in manual culture. A cell detachment device in the system passaged iPS cells automatically by dispersing colonies to single cells. In addition, iPS cells were passaged without any change in colony morphology or expression of undifferentiated stem cell markers during the 4 weeks of automated culture. Conclusions: Our results show that use of this compact, automated cell culture system facilitates stable iPS cell culture without obvious effects on iPS cell pluripotency or colony-forming ability. The feasibility of iPS cell culture automation may greatly facilitate the use of this versatile cell source for a variety of biomedical applications.

  19. An Extended Case Study Methodology for Investigating the Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    Science.gov (United States)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational, and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  20. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: Fluorescence In Situ Hybridization (FISH) at 3 levels of resolution, EcoRI restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
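
    A minimal single-threaded sketch of the annealing move described above: permute the N clone "columns" and accept or reject swaps by the Boltzmann criterion so that the number of gaps across the M object "rows" shrinks. The cost function is simplified, and the 40-machine socket-based parallelism and FISH ordering constraints are omitted.

        /* Simulated annealing over column orderings of an M x N 0/1
         * membership matrix; cost counts extra runs ("gaps") per row. */
        #include <math.h>
        #include <stdlib.h>

        int cost(const int *member, const int *order, int M, int N) {
            int total = 0;
            for (int r = 0; r < M; r++) {
                int runs = 0, prev = 0;
                for (int j = 0; j < N; j++) {
                    int cur = member[r * N + order[j]];
                    if (cur && !prev) runs++;
                    prev = cur;
                }
                if (runs > 1) total += runs - 1;
            }
            return total;
        }

        void anneal(const int *member, int *order, int M, int N) {
            double T = 10.0;
            for (long it = 0; it < 100000; it++, T *= 0.9999) {
                int i = rand() % N, j = rand() % N, t;
                int before = cost(member, order, M, N);
                t = order[i]; order[i] = order[j]; order[j] = t;  /* swap */
                int d = cost(member, order, M, N) - before;
                /* accept improvements; accept worse moves with prob e^(-d/T) */
                if (d > 0 && exp(-d / T) < (double)rand() / RAND_MAX) {
                    t = order[i]; order[i] = order[j]; order[j] = t; /* undo */
                }
            }
        }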

  1. Automated regenerable microarray-based immunoassay for rapid parallel quantification of mycotoxins in cereals.

    Science.gov (United States)

    Oswald, S; Karsunke, X Y Z; Dietrich, R; Märtlbauer, E; Niessner, R; Knopp, D

    2013-08-01

    An automated flow-through multi-mycotoxin immunoassay using the stand-alone Munich Chip Reader 3 platform and reusable biochips was developed and evaluated. This technology combines a unique microarray, prepared by covalent immobilization of target analytes or derivatives on diamino-poly(ethylene glycol)-functionalized glass slides, with dedicated chemiluminescence readout by a CCD camera. In a first stage, we aimed for the parallel detection of aflatoxins, ochratoxin A, deoxynivalenol, and fumonisins in cereal samples in a competitive indirect immunoassay format. The method combines sample extraction with methanol/water (80:20, v/v), extract filtration and dilution, and immunodetection using horseradish peroxidase-labeled anti-mouse IgG antibodies. The total analysis time, including extraction, extract dilution, measurement, and surface regeneration, was 19 min. The prepared microarray chip was reusable at least 50 times. Oat extract proved to be a representative sample matrix for the preparation of mycotoxin standards and the determination of different types of cereals such as oat, wheat, rye, and maize polenta at relevant concentrations according to European Commission regulation. The recovery rates of fortified samples in different matrices were lower (55-80% and 58-79%) for the more water-soluble fumonisin B1 and deoxynivalenol, and higher (127-132% and 82-120%) for the less polar aflatoxins and ochratoxin A, respectively. Finally, the results for wheat samples naturally contaminated with deoxynivalenol were critically compared in an interlaboratory comparison with data obtained from microtiter-plate ELISA, the aokin mycontrol® method, and liquid chromatography-mass spectrometry, and were found to be in good agreement. PMID:23620369

  2. Parallel worlds : art and sport in contemporary culture

    OpenAIRE

    Tainio, Matti

    2015-01-01

    This research maps the relationships between art and sport through various perspectives using a multidisciplinary approach. In addition, three artistic projects have been included in the research. The research produces a reasoned proposition for why art and sport should be seen as similar practices in contemporary culture and why this perspective is beneficial. In the everyday view, art and sport seem to be opposite cultural practices, but by adopting an appropriate view, similarities can be detected. In ord...

  3. Miniaturized Mass-Spectrometry-Based Analysis System for Fully Automated Examination of Conditioned Cell Culture Media

    NARCIS (Netherlands)

    Weber, E.; Pinkse, M.W.H.; Bener-Aksam, E.; Vellekoop, M.J.; Verhaert, P.D.E.M.

    2012-01-01

    We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with M

  4. Impact of Implementation of an Automated Liquid Culture System on Diagnosis of Tuberculous Pleurisy

    OpenAIRE

    Lee, Byung Hee; Yoon, Seong Hoon; Yeo, Hye Ju; Kim, Dong Wan; Lee, Seung Eun; Cho, Woo Hyun; Lee, Su Jin; Kim, Yun Seong; Jeon, Doosoo

    2015-01-01

    This study was conducted to evaluate the impact of implementation of an automated liquid culture system on the diagnosis of tuberculous pleurisy in an HIV-uninfected patient population. We retrospectively compared the culture yield, time to positivity, and contamination rate of pleural effusion samples in the BACTEC Mycobacteria Growth Indicator Tube 960 (MGIT) and Ogawa media among patients with tuberculous pleurisy. Out of 104 effusion samples, 43 (41.3%) were culture positive on either the...

  5. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    OpenAIRE

    Burcin Ozcan; Pooran Negi; Fernanda Laezza; Manos Papadakis; Demetrio Labate

    2015-01-01

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons ...

  6. "Parallel Leadership in an "Unparallel" World"--Cultural Constraints on the Transferability of Western Educational Leadership Theories across Cultures

    Science.gov (United States)

    Goh, Jonathan Wee Pin

    2009-01-01

    With the global economy becoming more integrated, the issues of cross-cultural relevance and transferability of leadership theories and practices have become increasingly urgent. Drawing upon the concept of parallel leadership in schools proposed by Crowther, Kaagan, Ferguson, and Hann as an example, the purpose of this paper is to examine the…

  7. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    OpenAIRE

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.

    2007-01-01

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based ...

  8. Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram

    Science.gov (United States)

    Zang, Pengxiao; Liu, Gangjun; Zhang, Miao; Dongye, Changlei; Wang, Jie; Pechauer, Alex D.; Hwang, Thomas S.; Wilson, David J.; Huang, David; Li, Dengwang

    2016-01-01

    We propose an innovative registration method to correct motion artifacts in wide-field optical coherence tomography angiography (OCTA) acquired by ultrahigh-speed swept-source OCT (>200 kHz A-scan rate). Considering that the number of A-scans along the fast axis is much higher than the number of positions along the slow axis in the wide-field OCTA scan, a non-orthogonal scheme is introduced. Two en face angiograms in the vertical priority (2 y-fast) are divided into microsaccade-free parallel strips. A gross registration based on large vessels and a fine registration based on small vessels are sequentially applied to register parallel strips into a composite image. This technique is extended to automatically montage individual registered, motion-free angiograms into an ultrawide-field view. PMID:27446709

  9. Rapid automated classification of anesthetic depth levels using GPU based parallelization of neural networks.

    Science.gov (United States)

    Peker, Musa; Şen, Baha; Gürüler, Hüseyin

    2015-02-01

    The effect of anesthesia on the patient is referred to as the depth of anesthesia. Rapid classification of the appropriate depth level of anesthesia is a matter of great importance in surgical operations. Similarly, accelerating classification algorithms is important for the rapid solution of problems in the field of biomedical signal processing. However, numerous time-consuming mathematical operations are required during the training and testing stages of classification algorithms, especially in neural networks. In this study, to accelerate the process, the parallel programming and computing platform Nvidia CUDA, which facilitates dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU), was utilized. The system was employed to detect anesthetic depth level on a related electroencephalogram (EEG) data set. This dataset is rather complex and large. Moreover, achieving more anesthetic levels with rapid response is critical in anesthesia. The proposed parallelization method yielded highly accurate classification results in less time. PMID:25650073
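
    The abstract does not include the CUDA kernels, but the hot spot it accelerates is the dense matrix arithmetic inside neural-network training and testing. The C sketch below parallelizes one forward-pass layer, with OpenMP threads standing in for GPU threads, purely to show where the data parallelism lies.

        /* One neural-network layer: out = sigmoid(W*in + b); the n_out
         * rows are independent, which is what the GPU exploits. */
        #include <math.h>

        void layer_forward(const float *W, const float *b, const float *in,
                           float *out, int n_out, int n_in) {
            #pragma omp parallel for
            for (int j = 0; j < n_out; j++) {
                float acc = b[j];
                for (int i = 0; i < n_in; i++)
                    acc += W[j * n_in + i] * in[i];
                out[j] = 1.0f / (1.0f + expf(-acc));  /* logistic activation */
            }
        }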

  10. Two-dimensional parallel array technology as a new approach to automated combinatorial solid-phase organic synthesis

    Science.gov (United States)

    Brennan; Biddison; Frauendorf; Schwarcz; Keen; Ecker; Davis; Tinder; Swayze

    1998-01-01

    An automated, 96-well parallel array synthesizer for solid-phase organic synthesis has been designed and constructed. The instrument employs a unique reagent array delivery format, in which each reagent utilized has a dedicated plumbing system. An inert atmosphere is maintained during all phases of a synthesis, and temperature can be controlled via a thermal transfer plate which holds the injection-molded reaction block. The reaction plate assembly slides in the X-axis direction, while eight nozzle blocks holding the reagent lines slide in the Y-axis direction, allowing for the extremely rapid delivery of any of 64 reagents to 96 wells. In addition, there are six banks of fixed nozzle blocks, which deliver the same reagent or solvent to eight wells at once, for a total of 72 possible reagents. The instrument is controlled by software which allows the straightforward programming of the synthesis of a large number of compounds. This is accomplished by supplying a general synthetic procedure in the form of a command file, which calls upon certain reagents to be added to specific wells via lookup in a sequence file. The bottle position, flow rate, and concentration of each reagent are stored in a separate reagent-table file. To demonstrate the utility of the parallel array synthesizer, a small combinatorial library of hydroxamic acids was prepared in high-throughput mode for biological screening. Approximately 1300 compounds were prepared on a 10 μmole scale (3-5 mg) in a few weeks. The resulting crude compounds were generally >80% pure and were utilized directly for high-throughput screening in antibacterial assays. Several active wells were found, and the activity was verified by solution-phase synthesis of analytically pure material, indicating that the system described herein is an efficient means for the parallel synthesis of compounds for lead discovery. PMID:10099494
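
    The command-file/sequence-file/reagent-table organization lends itself to a simple data model. The C sketch below invents field names for illustration (the instrument's real file formats are not given in the abstract) to show how a command step might resolve a reagent name to its plumbing parameters.

        /* Hypothetical reagent-table entry: bottle position, flow rate,
         * and concentration per reagent, keyed by the name the command
         * file uses. */
        #include <string.h>

        typedef struct {
            char   name[32];          /* identifier used by the command file */
            int    bottle_pos;        /* physical bottle position */
            double flow_rate_ml_min;  /* delivery flow rate */
            double conc_mol_l;        /* reagent concentration */
        } ReagentEntry;

        const ReagentEntry *lookup(const ReagentEntry *table, int n,
                                   const char *name) {
            for (int i = 0; i < n; i++)
                if (strcmp(table[i].name, name) == 0)
                    return &table[i];
            return 0;  /* unknown reagent: the step cannot execute */
        }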

  11. Slavic and Kazakh Folklore Calendar: Typological and Ethno-Cultural Parallels

    Directory of Open Access Journals (Sweden)

    Galina Vlasova

    2016-04-01

    The study of multi-ethnic folk typology in the ethno-cultural region of Kazakhstan is of fundamental importance for identifying ethno-cultural typological parallels in the holiday calendar and rituals. The mechanism of folk typology is observed in ritual structures when comparing the Slavic and Kazakh folklore calendars. There are typological parallels between all components of different Kazakhstan ethnic group celebrations: texts, rites, rituals, and cults. The Kazakh and Slavic calendar systems have a collective, functional character and are passed down from generation to generation. The entire annual cycle of Eurasian festivals is based on the collective existence principle. The Slavic holiday calendar represents a dual-faith synthesis of pagan and Christian entities, while the Kazakh holiday calendar centers on the connection of pagan and Muslim principles. Typologically similar elements of Slavic and Kazakh holidays include structural relatedness, calendar confinement, similar archetypical rituals, and ceremonial models. Slavic and Kazakh ethnic and cultural contacts are reflected in joint celebrations, in interethnic borrowing of practices, rituals, and games, and in Russian and Kazakh song performances by representatives of different ethnic groups. Field observations by Kazakh folklorists suggest the continuing existence of joint Nauryz and Shrovetide celebration traditions. The folklore situation in Kazakhstan demonstrates both the different stages of innovation among the closely related cultures of the Eastern Slavs and the typological relationship and bilateral borrowing through contact with unrelated Turkic ethnic groups. The typological and ethno-cultural parallels, as well as the positive features of these holidays, make them a universal phenomenon important for all members of a social or ethnic group.

  12. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    Science.gov (United States)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational, and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  13. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    Science.gov (United States)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report documents a fluid-dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid-dynamics principles appropriate for the mixing of a patch of high-oxygen-content medium into surrounding medium initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  14. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge;

    1991-01-01

    Primary cultures of GABAergic cerebral cortex neurons and glutamatergic cerebellar granule cells were used to study the expression of synaptophysin, a synaptic vesicle marker protein, along with the ability of each cell type to release neurotransmitter upon stimulation. Synaptophysin expression and neurotransmitter release were measured in each culture type as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and distribution of synaptophysin in the developing cells were assessed by quantitative immunoblotting and light-microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of

  15. A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation

    Science.gov (United States)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. Conventional measurement techniques, however, require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation has become an active area of research; fully automated systems for cultural heritage documentation, however, remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets based on parametric and procedural modelling techniques, using airborne and terrestrial laser scanning data. We demonstrate the contribution of our methodology, implemented in an open-source software environment, on the example of a 16th-century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  16. Automated sample preparation in a microfluidic culture device for cellular metabolomics.

    Science.gov (United States)

    Filla, Laura A; Sanders, Katherine L; Filla, Robert T; Edwards, James L

    2016-06-21

    Sample pretreatment in conventional cellular metabolomics entails rigorous lysis and extraction steps which increase the duration as well as limit the consistency of these experiments. We report a biomimetic cell culture microfluidic device (MFD) which is coupled with an automated system for rapid, reproducible cell lysis using a combination of electrical and chemical mechanisms. In-channel microelectrodes were created using facile fabrication methods, enabling the application of electric fields up to 1000 V cm(-1). Using this platform, average lysing times were 7.12 s and 3.03 s for chips with no electric fields and electric fields above 200 V cm(-1), respectively. Overall, the electroporation MFDs yielded a ∼10-fold improvement in lysing time over standard chemical approaches. Detection of multiple intracellular nucleotides and energy metabolites in MFD lysates was demonstrated using two different MS platforms. This work will allow for the integrated culture, automated lysis, and metabolic analysis of cells in an MFD which doubles as a biomimetic model of the vasculature. PMID:27118418

  17. Evaluation of a Multi-Parameter Sensor for Automated, Continuous Cell Culture Monitoring in Bioreactors

    Science.gov (United States)

    Pappas, D.; Jeevarajan, A.; Anderson, M. M.

    2004-01-01

    Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments in microgravity. Measurement of cell culture medium allows for the optimization of culture conditions on orbit to maximize cell growth and minimize unnecessary exchange of medium. While several discrete sensors exist to measure culture health, a multi-parameter sensor would simplify the experimental apparatus. One such sensor, the Paratrend 7, consists of three optical fibers for measuring pH, dissolved oxygen (pO2), and dissolved carbon dioxide (pCO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-arterial placement in clinical patients, and can potentially be used in bioreactors in NASA's Space Shuttle and International Space Station biotechnology programs. Methods: A Paratrend 7 sensor was placed at the outlet of a rotating-wall perfused-vessel bioreactor system inoculated with BHK-21 (baby hamster kidney) cells. Cell culture medium (GTSF-2, composed of 40% minimum essential medium and 60% L-15 Leibovitz medium) was measured manually using a bench-top blood gas analyzer (BGA, Ciba-Corning). Results: A Paratrend 7 sensor was used over a long-term (>120 day) cell culture experiment. The sensor was able to track changes in cell medium pH, pO2, and pCO2 due to the consumption of nutrients by the BHK-21 cells. When compared to manually obtained BGA measurements, the sensor had good agreement for pH, pO2, and pCO2, with bias [and precision] of 0.02 [0.15], 1 mm Hg [18 mm Hg], and -4.0 mm Hg [8.0 mm Hg], respectively. The Paratrend oxygen sensor was recalibrated (offset) periodically due to drift. The bias for the raw (no offset or recalibration) oxygen measurements was 42 mm Hg [38 mm Hg]. The measured response (rise) time of the sensor was 20 +/- 4 s for pH, 81 +/- 53 s for pCO2, and 51 +/- 20 s for pO2. For long-term cell culture measurements, these response times are more than adequate. Based on these findings, the Paratrend sensor could
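
    The bias [precision] notation above is read here as the mean [standard deviation] of the sensor-minus-BGA differences (our reading; the abstract does not define it explicitly). A small C helper reproducing that arithmetic from paired measurements:

        /* Bias = mean difference, precision = SD of differences, for n
         * paired sensor/BGA readings of the same quantity. */
        #include <math.h>

        void bias_precision(const double *sensor, const double *bga, int n,
                            double *bias, double *precision) {
            double sum = 0.0, sumsq = 0.0;
            for (int i = 0; i < n; i++) {
                double d = sensor[i] - bga[i];
                sum += d;
                sumsq += d * d;
            }
            *bias = sum / n;                                      /* mean */
            *precision = sqrt((sumsq - sum * sum / n) / (n - 1)); /* SD   */
        }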

  18. FY1995 study of low power LSI design automation software with parallel processing; 1995 nendo heiretsu shori wo katsuyoshita shodenryoku LSI muke sekkei jidoka software no kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The need for low-power LSIs has increased rapidly in recent years. For low-power LSI development, not only new circuit technologies but also new design automation tools supporting those technologies are indispensable. The purpose of this project is to develop new design automation software able to design digital LSIs with much lower power than conventional CMOS LSIs. New design automation software for very-low-power LSIs has been developed targeting the pass-transistor logic SPL, a dedicated low-power circuit technology. The software includes a logic synthesis function for pass-transistor-based macrocells and a macrocell placement function. Several new algorithms have been developed for the software, e.g., BDD construction. Some of them are designed and implemented for parallel processing in order to reduce processing time. The logic synthesis function was tested on a set of benchmarks and finally applied to a low-power CPU design. The designed 8-bit CPU was fully compatible with the Zilog Z-80. Its power dissipation was compared with that of a commercial CMOS Z-80: the new CPU reduced power consumption by up to 82%. In addition, the speedup from parallel processing was measured on the macrocell placement function, where a 34-fold speedup was realized. (NEDO)

  19. Automated detection of soma location and morphology in neuronal network cultures.

    Directory of Open Access Journals (Sweden)

    Burcin Ozcan

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, such stacks contain a small number of images, so only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of the level set method and directional multiscale filters. We also present an approach to extract the soma's surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications.

  20. An Engineered Approach to Stem Cell Culture: Automating the Decision Process for Real-Time Adaptive Subculture of Stem Cells

    OpenAIRE

    Ker, Dai Fei Elmer; Weiss, Lee E.; Junkers, Silvina N.; Chen, Mei; Yin, Zhaozheng; Sandbothe, Michael F.; Huh, Seung-il; Eom, Sungeun; Bise, Ryoma; Osuna-Highley, Elvira; Kanade, Takeo; Campbell, Phil G.

    2011-01-01

    Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and obj...

  1. NeuronMetrics: Software for Semi-Automated Processing of Cultured-Neuron Images

    Science.gov (United States)

    Narro, Martha L.; Yang, Fan; Kraft, Robert; Wenk, Carola; Efrat, Alon; Restifo, Linda L.

    2007-01-01

    Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics™ for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch-number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of ~60 2D images is 1.0–2.5 hours, from a folder of images to a table of numeric data. NeuronMetrics’ output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery. PMID:17270152

  2. Identifying and Quantifying Cultural Factors That Matter to the IT Workforce: An Approach Based on Automated Content Analysis

    DEFF Research Database (Denmark)

    Schmiedel, Theresa; Müller, Oliver; Debortoli, Stefan;

    2016-01-01

    Organizational culture represents a key success factor in highly competitive environments, such as the IT sector. Thus, IT companies need to understand what makes up a culture that fosters employee performance. While existing research typically uses self-report questionnaires to study this relation, our study builds on 112,610 online reviews of Fortune 500 IT companies collected from Glassdoor, an online platform on which current and former employees can anonymously review companies and their management. We perform an automated content analysis to identify cultural factors that employees emphasize in their reviews. Through a regression analysis on numerical employee satisfaction ratings, we find that a culture of learning and performance orientation contributes to employee motivation, while a culture of assertiveness and gender inegalitarianism has a strong negative influence on employees.

  3. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    Science.gov (United States)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, automated photogrammetric systems based on Structure from Motion (SfM) techniques are widely used, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image-Based Modeling) and a classical survey with a total station (Nikon Nivo C). Data acquisition was carried out with a Canon EOS 5D Mark II digital camera and a Canon EF 17-40 mm f/4L USM lens @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out in Agisoft PhotoScan, and the final result was a scaled 3D model of the monument, imported into MeshLab for viewing. Three orthophotos in JPG format were extracted from the model and imported into AutoCAD to obtain façade surveys.

  4. PetriJet Platform Technology: An Automated Platform for Culture Dish Handling and Monitoring of the Contents.

    Science.gov (United States)

    Vogel, Mathias; Boschke, Elke; Bley, Thomas; Lenk, Felix

    2015-08-01

    Due to the size of the required equipment, automated laboratory systems are often unavailable or impractical for use in small- and mid-sized laboratories. However, recent developments in automation engineering provide endless possibilities for incorporating benchtop devices. Here, the authors describe the development of a platform technology to handle sealed culture dishes. The programming is based on the Petri net method and implemented via Codesys V3.5 pbF. The authors developed a system of three independent, electrically driven axes capable of handling sealed culture dishes. The device performs two different processes. First, it automatically obtains an image of every processed culture dish. Second, a server-based image analysis algorithm provides the user with several parameters of the cultivated sample on the culture dish. For demonstration purposes, the authors developed a continuous, systematic, nondestructive, and quantitative method for monitoring the growth of a hairy root culture. New results can be displayed with respect to the previous images. This system is highly accurate, and the results can be used to simulate the growth of biological cultures. The authors believe that the innovative features of this platform can be implemented, for example, in the food industry, clinical environments, and research laboratories. PMID:25787804
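
    The record notes that the control programming is based on the Petri net method. As a hedged illustration of that formalism (in Python rather than the authors' Codesys environment), the sketch below models two dish-handling steps as transitions over token-holding places; the place and transition names are invented.

    ```python
    # Minimal Petri-net sketch (not the authors' code): places hold token counts,
    # and a transition fires only when every input place holds a token, modelling
    # dish-handling steps such as fetch-dish -> image-dish.
    from dataclasses import dataclass, field

    @dataclass
    class PetriNet:
        marking: dict                                     # place name -> token count
        transitions: dict = field(default_factory=dict)   # name -> (inputs, outputs)

        def enabled(self, t):
            ins, _ = self.transitions[t]
            return all(self.marking[p] > 0 for p in ins)

        def fire(self, t):
            if not self.enabled(t):
                raise RuntimeError(f"transition {t!r} not enabled")
            ins, outs = self.transitions[t]
            for p in ins:
                self.marking[p] -= 1
            for p in outs:
                self.marking[p] += 1

    net = PetriNet(
        marking={"dish_in_stack": 3, "at_camera": 0, "imaged": 0},
        transitions={
            "fetch": (["dish_in_stack"], ["at_camera"]),
            "image": (["at_camera"], ["imaged"]),
        },
    )
    net.fire("fetch"); net.fire("image")
    print(net.marking)  # {'dish_in_stack': 2, 'at_camera': 0, 'imaged': 1}
    ```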

  5. The Effect of Culture on the Sales Process Within a Global Company. Case Company ABB Oy Distribution Automation Sales Unit.

    OpenAIRE

    Kruger, Frantz

    2011-01-01

    My aim in this study was to investigate the possible differences between cultures when looking at them in the context of the sales process within a global company. If these differences did exist I would further attempt to prove that through careful analysis of the sales process, and the elements within the sales process, the associated activity within the sales process could be predicted or anticipated. I compared the activity of the ABB Distribution Automation Sales Unit (Vaasa, Finland) tow...

  6. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    Directory of Open Access Journals (Sweden)

    Dai Fei Elmer Ker

    Full Text Available Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and
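
    The prediction step described above, estimating when confluency will cross a pre-defined threshold so operators can be notified 4 hours ahead, can be sketched with a simple linear extrapolation. The paper does not specify its prediction model, so the fit and the numbers below are illustrative only.

    ```python
    # Hedged sketch of threshold forecasting: fit recent confluency estimates and
    # predict when the culture crosses a pre-defined subculture threshold.
    import numpy as np

    def hours_until_threshold(t_hours, confluency, threshold=0.8):
        slope, intercept = np.polyfit(t_hours, confluency, 1)
        if slope <= 0:
            return None  # culture is not growing toward the threshold
        t_cross = (threshold - intercept) / slope
        return max(0.0, t_cross - t_hours[-1])

    t = np.array([0, 6, 12, 18, 24], float)        # imaging times (h), made up
    c = np.array([0.20, 0.28, 0.37, 0.45, 0.52])   # estimated confluency, made up
    eta = hours_until_threshold(t, c)
    if eta is not None and eta <= 4.0:
        print(f"notify operator: subculture within {eta:.1f} h")
    else:
        print(f"threshold expected in ~{eta:.1f} h")
    ```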

  7. Analysis of the disagreement between automated bioluminescence-based and culture methods for detecting significant bacteriuria, with proposals for standardizing evaluations of bacteriuria detection methods.

    OpenAIRE

    Nichols, W. W.; Curtis, G D; Johnston, H H

    1982-01-01

    A fully automated method for detecting significant bacteriuria is described which uses firefly luciferin and luciferase to detect bacterial ATP in urine. The automated method was calibrated and evaluated, using 308 urine specimens, against two reference culture methods. We obtained a specificity of 0.79 and sensitivity of 0.75 using a quantitative pour plate reference test and a specificity of 0.79 and a sensitivity of 0.90 using a semiquantitative standard loop reference test. The majority o...

  8. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    Science.gov (United States)

    Kim, Youn-Kyu; Park, Seul-Hyun; Lee, Joo-Hee; Choi, Gi-Hyuk

    2015-03-01

    In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on the muscles to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. This reactor design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, so that the bioreactor can be reused after each experiment. The bioreactor control system is designed to circulate culture media to 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of this device.
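
    The set points above (36°C, relative humidity above 70%, medium circulation up to 1 ml/min) can be illustrated with a toy on/off control step. This is not the flight software; the hysteresis band is an assumption.

    ```python
    # Illustrative set-point logic for the reactor described above: simple on/off
    # control toward 36 degC and RH > 70 %, with the pump at its 1 ml/min maximum.
    SET_TEMP_C, MIN_RH = 36.0, 70.0

    def control_step(temp_c: float, rh: float) -> dict:
        return {
            "heater_on": temp_c < SET_TEMP_C - 0.2,   # 0.2 degC hysteresis band (assumed)
            "humidifier_on": rh < MIN_RH,
            "pump_ml_per_min": 1.0,                   # maximum circulation rate
        }

    print(control_step(35.6, 68.0))  # {'heater_on': True, 'humidifier_on': True, ...}
    ```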

  9. The performance of fully automated urine analysis results for predicting the need of urine culture test

    Directory of Open Access Journals (Sweden)

    Hatice Yüksel

    2014-06-01

    Full Text Available Objectives: Urinalysis and urine culture are the most common tests for the diagnosis of urinary tract infections. The aim of our study was to examine the diagnostic performance of urine analysis and its role in determining the need for urine culture. Methods: Urine culture and urine analysis results of 362 patients were retrospectively analyzed. Culture results were taken as the reference for the chemical and microscopic examination of urine; the diagnostic accuracy of test parameters that may be markers of urinary tract infection was calculated, along with the performance of urine analysis for predicting urine culture requirements. Results: A total of 362 urine culture results were evaluated, of which 67% were negative. The results for leukocyte esterase and nitrite in chemical analysis, and for leukocytes and bacteria in microscopic analysis, were normal in 50.4% of culture-negative urines. In diagnostic accuracy calculations, leukocyte esterase (86.1%) and microscopic leukocytes (88.0%) showed high sensitivity, while nitrite (95.4%) and bacteria (86.6%) showed high specificity. The area under the curve was calculated as 0.852 in ROC analysis of the microscopic examination for leukocytes. Conclusion: Fully automated urine analyzers can provide sufficient diagnostic accuracy for urine analysis. Evaluating urine analysis results effectively can predict the necessity of urine culture requests and may contribute to a reduction in workload and cost. J Clin Exp Invest 2014; 5(2): 286-289
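
    The sensitivity and specificity figures above follow from the standard confusion-matrix definitions. The sketch below reproduces the arithmetic with hypothetical counts, since the record does not publish the full confusion matrix.

    ```python
    # Worked example of the reported accuracy measures, with made-up counts:
    # sensitivity and specificity of a urinalysis parameter against culture
    # as the reference standard.
    def sensitivity(tp, fn): return tp / (tp + fn)
    def specificity(tn, fp): return tn / (tn + fp)

    tp, fn, tn, fp = 103, 17, 190, 52   # hypothetical leukocyte-esterase results
    print(f"sensitivity = {sensitivity(tp, fn):.1%}")   # ~85.8%
    print(f"specificity = {specificity(tn, fp):.1%}")   # ~78.5%
    ```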

  10. Characterization and Classification of Adherent Cells in Monolayer Culture using Automated Tracking and Evolutionary Algorithms

    OpenAIRE

    Zhang, Z.; Bedder, M; Smith, S L; Walker, D; Shabir, S.; Southgate, J

    2016-01-01

    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A system of cell tracking employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24hour period. Subsequent analysis following feature extraction demonstrated the ability of the technique t...

  11. Attempts to Automate the Process of Generation of Orthoimages of Objects of Cultural Heritage

    Science.gov (United States)

    Markiewicz, J. S.; Podlasiak, P.; Zawieska, D.

    2015-02-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. The orthoimage is a cartometric form of photographic presentation of information in a two-dimensional reference system. The paper discusses the automation of orthoimage generation based on TLS data and digital images. At present, attempts are being made to apply modern technologies not only for the surveys themselves, but also during data processing. This paper presents attempts at utilising appropriate algorithms, and the authors' application, for automatic generation of the projection plane needed to acquire intensity orthoimages from TLS data. Such planes are defined manually in the majority of popular TLS data processing applications. A separate issue related to RGB image generation is the orientation of digital images in relation to scans; this is important in particular when scans and photographs are not taken simultaneously. This paper presents experiments concerning the use of the SIFT algorithm for automatic matching of intensity orthoimages and digital (RGB) photographs. Satisfactory results have been obtained both for the automation of the process and for the quality of the resulting orthoimages.
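
    The SIFT-based matching step mentioned above can be sketched with OpenCV as a stand-in for the authors' implementation. The file names are placeholders, and Lowe's ratio test is a common (assumed) way of filtering the matches.

    ```python
    # Sketch of SIFT keypoint matching between an intensity orthoimage and an
    # RGB photograph; requires the opencv-python package.
    import cv2

    def match_sift(img_a_path: str, img_b_path: str, ratio: float = 0.75):
        a = cv2.imread(img_a_path, cv2.IMREAD_GRAYSCALE)
        b = cv2.imread(img_b_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(a, None)
        kp_b, des_b = sift.detectAndCompute(b, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
                if m.distance < ratio * n.distance]   # Lowe's ratio test
        return kp_a, kp_b, good

    # kp_a, kp_b, good = match_sift("intensity_ortho.png", "rgb_photo.png")
    ```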

  12. Evaluation of the Paratrend Multi-Analyte Sensor for Potential Utilization in Long-Duration Automated Cell Culture Monitoring

    Science.gov (United States)

    Hwang, Emma Y.; Pappas, Dimitri; Jeevarajan, Antony S.; Anderson, Melody M.

    2004-01-01

    BACKGROUND: Compact and automated sensors are desired for assessing the health of cell cultures in biotechnology experiments. While several single-analyte sensors exist to measure culture health, a multi-analyte sensor would simplify the cell culture system. One such multi-analyte sensor, the Paratrend 7 manufactured by Diametrics Medical, consists of three optical fibers for measuring pH, dissolved carbon dioxide (pCO2), and dissolved oxygen (pO2), and a thermocouple to measure temperature. The sensor bundle was designed for intra-vascular measurements in clinical settings, and can be used in bioreactors operated both on the ground and in NASA's Space Shuttle and International Space Station (ISS) experiments. METHODS: A Paratrend 7 sensor was placed at the outlet of a bioreactor inoculated with BHK-21 (baby hamster kidney) cells. The pH, pCO2, pO2, and temperature data were transferred continuously to an external computer. Cell culture medium, manually extracted from the bioreactor through a sampling port, was also assayed using a bench-top blood gas analyzer (BGA). RESULTS: Two Paratrend 7 sensors were used over a single cell culture experiment (64 days). When compared to the manually obtained BGA samples, the sensors had good agreement for pH, pCO2, and pO2, with bias (and precision) of 0.005 (0.024), 8.0 mmHg (4.4 mmHg), and 11 mmHg (17 mmHg), respectively, for the first two sensors. A third Paratrend sensor (operated for 141 days) had similar agreement (0.02±0.15 for pH, -4±8 mmHg for pCO2, and 24±18 mmHg for pO2). CONCLUSION: The resulting biases and precisions are comparable to Paratrend sensor clinical results. Although the pO2 differences may be acceptable for clinically relevant measurement ranges, the O2 sensor in this bundle may not be reliable enough for the ranges of pO2 in these cell culture studies without periodic calibration.
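
    The bias and precision figures quoted above are, in essence, the mean and standard deviation of paired sensor-minus-BGA differences. A minimal reproduction of that arithmetic with made-up pH readings:

    ```python
    # Bias = mean of paired differences; precision = their standard deviation.
    # The readings below are illustrative, not the paper's data.
    import numpy as np

    sensor = np.array([7.31, 7.28, 7.35, 7.40, 7.33])   # Paratrend pH readings
    bga    = np.array([7.30, 7.26, 7.36, 7.39, 7.31])   # paired BGA pH readings
    diff = sensor - bga
    print(f"bias = {diff.mean():+.3f}, precision (SD) = {diff.std(ddof=1):.3f}")
    ```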

  13. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    Directory of Open Access Journals (Sweden)

    Nuez Fernando

    2008-01-01

    Full Text Available Abstract Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http

  14. Automated Voxel Model from Point Clouds for Structural Analysis of Cultural Heritage

    Science.gov (United States)

    Bitelli, G.; Castellazzi, G.; D'Altri, A. M.; De Miranda, S.; Lambertini, A.; Selvaggi, I.

    2016-06-01

    In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, to support special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; a typical product is a point cloud. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, into a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result, a voxel model with variable resolution is produced. Different parameters are compared, and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
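
    A minimal voxelization step in the spirit of the procedure above can be sketched as follows: point-cloud coordinates are binned into a regular grid, and any voxel containing points becomes solid. The variable-resolution and hole-filling refinements of the paper are omitted, and the toy extents are invented.

    ```python
    # Point cloud -> boolean occupancy grid at a fixed voxel size.
    import numpy as np

    def voxelize(points: np.ndarray, voxel_size: float) -> np.ndarray:
        """points: (N, 3) array of coordinates -> boolean occupancy grid."""
        mins = points.min(axis=0)
        idx = np.floor((points - mins) / voxel_size).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
        grid[tuple(idx.T)] = True   # mark every voxel that contains a point
        return grid

    cloud = np.random.rand(10000, 3) * [10.0, 6.0, 25.0]   # toy "tower" extents (m)
    occ = voxelize(cloud, voxel_size=0.25)
    print(occ.shape, int(occ.sum()), "filled voxels")
    ```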

  15. Scalable Transcriptome Preparation for Massive Parallel Sequencing

    OpenAIRE

    Henrik Stranneheim; Beata Werne; Ellen Sherwood; Joakim Lundeberg

    2011-01-01

    BACKGROUND: The tremendous output of massive parallel sequencing technologies requires automated robust and scalable sample preparation methods to fully exploit the new sequence capacity. METHODOLOGY: In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was co...

  16. Scalable Transcriptome Preparation for Massive Parallel Sequencing

    OpenAIRE

    Stranneheim, Henrik; Werne, Beata; Sherwood, Ellen; Lundeberg, Joakim

    2011-01-01

    Background The tremendous output of massive parallel sequencing technologies requires automated robust and scalable sample preparation methods to fully exploit the new sequence capacity. Methodology In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was comp...

  17. Accurate, precise modeling of cell proliferation kinetics from time-lapse imaging and automated image analysis of agar yeast culture arrays

    Directory of Open Access Journals (Sweden)

    Zhao Lue

    2007-01-01

    Full Text Available Abstract Background Genome-wide mutant strain collections have increased demand for high throughput cellular phenotyping (HTCP). For example, investigators use HTCP to investigate interactions between gene deletion mutations and additional chemical or genetic perturbations by assessing differences in cell proliferation among the collection of 5000 S. cerevisiae gene deletion strains. Such studies have thus far been predominantly qualitative, using agar cell arrays to subjectively score growth differences. Quantitative systems level analysis of gene interactions would be enabled by more precise HTCP methods, such as kinetic analysis of cell proliferation in liquid culture by optical density. However, requirements for processing liquid cultures make them relatively cumbersome and low throughput compared to agar. To improve HTCP performance and advance capabilities for quantifying interactions, YeastXtract software was developed for automated analysis of cell array images. Results YeastXtract software was developed for kinetic growth curve analysis of spotted agar cultures. The accuracy and precision for image analysis of agar culture arrays was comparable to OD measurements of liquid cultures. Using YeastXtract, image intensity vs. biomass of spot cultures was linearly correlated over two orders of magnitude. Thus cell proliferation could be measured over about seven generations, including four to five generations of relatively constant exponential phase growth. Spot area normalization reduced the variation in measurements of total growth efficiency. A growth model, based on the logistic function, increased precision and accuracy of maximum specific rate measurements, compared to empirical methods. The logistic function model was also more robust against data sparseness, meaning that less data was required to obtain accurate, precise, quantitative growth phenotypes. Conclusion Microbial cultures spotted onto agar media are widely used for genotype
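
    The logistic growth model mentioned above is easy to sketch: fit a logistic function to spot intensities (a proxy for biomass) and read out the maximum specific growth rate. The snippet below uses synthetic data and standard SciPy fitting; it is an illustration, not YeastXtract itself.

    ```python
    # Fit a logistic growth curve and extract the maximum specific rate r.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1.0 + np.exp(-r * (t - t0)))

    np.random.seed(0)
    t = np.linspace(0, 48, 13)                                  # imaging times (h)
    y = logistic(t, K=1.0, r=0.25, t0=20) + np.random.normal(0, 0.02, t.size)
    (K, r, t0), _ = curve_fit(logistic, t, y, p0=(1.0, 0.1, 15.0))
    print(f"carrying capacity K = {K:.2f}, max specific rate r = {r:.3f}/h")
    ```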

  18. Organizational changes and automation: By means of a customer-oriented policy the so-called 'island culture' disappears: Part 2

    International Nuclear Information System (INIS)

    Automation offers great opportunities in the efforts of energy utilities in the Netherlands to reorganize towards more customer-oriented businesses. However, automation in itself is not enough. First, the organizational structure has to be changed considerably. Various energy utilities have already started on it. The restructuring principle is the same everywhere, but the way it is implemented differs widely. In this article attention is paid to different customer information systems. These systems can put an end to the so-called island culture within the energy utility organizations. The systems discussed are IRD of Systema and RIVA of SAP (both German software businesses), and two Dutch systems: Numis-2000 of Multihouse and KIS/400 of NUON Info-Systemen

  19. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    International Nuclear Information System (INIS)

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine, or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted on by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of the calculations to minimize the number of arithmetic operations and the storage requirement, adjust the peak local memory usage by index-range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel execution. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory [MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ)
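
    TCE's search for a minimal-cost binary contraction order has a small-scale analogue in NumPy: einsum_path chooses the pairwise contraction sequence with the lowest estimated operation count. The sketch below is only an analogy with toy tensors and index ranges, not TCE itself.

    ```python
    # Cost-driven binary contraction ordering for a three-tensor contraction.
    import numpy as np

    A = np.random.rand(8, 8, 24, 24)       # toy amplitudes t_{ij}^{ab}
    B = np.random.rand(24, 24, 24, 24)     # toy integrals v_{ab}^{cd}
    C = np.random.rand(24, 24, 8, 8)       # toy intermediate u_{cd}^{kl}

    # einsum_path searches for the pairwise contraction order with the lowest
    # estimated FLOP count, analogous to TCE's minimal-cost ordering step.
    path, info = np.einsum_path('ijab,abcd,cdkl->ijkl', A, B, C, optimize='optimal')
    print(info)                            # reports the chosen order and cost estimate
    result = np.einsum('ijab,abcd,cdkl->ijkl', A, B, C, optimize=path)
    ```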

  20. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Directory of Open Access Journals (Sweden)

    Mohan A V S K Katta

    Full Text Available Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High throughput platforms like Illumina HiSeq produce terabytes of raw data that requires quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox, that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base level statistics. It can be used as a stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in a high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and, internally, fine-grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user-friendly format. The pipeline presents a simple menu-driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms other similar existing QC pipelines/tools in speed. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at the URLs https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.

  1. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Science.gov (United States)

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K

    2015-01-01

    Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High throughput platforms like Illumina HiSeq produce terabytes of raw data that requires quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base level statistics. It can be used as stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and internally, a fine grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user friendly format. The pipeline developed presents a simple menu driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms in speed against other similar existing QC pipeline/tools. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at the URL https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data. PMID:26460497

  2. Capillary electrophoresis for automated on-line monitoring of suspension cultures: Correlating cell density, nutrients and metabolites in near real-time.

    Science.gov (United States)

    Alhusban, Ala A; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M

    2016-05-12

    Increasingly stringent demands on the production of biopharmaceuticals require monitoring of process parameters that impact product quality. We developed an automated platform for on-line, near real-time monitoring of suspension cultures by integrating microfluidic components for cell counting and filtration with a high-resolution separation technique. This enabled the correlation of the growth of a human lymphocyte cell line with changes in the essential metabolic markers glucose, glutamine, leucine/isoleucine and lactate, determined by Sequential Injection-Capillary Electrophoresis (SI-CE). Using 8.1 mL of media (41 μL per run), the metabolic status and cell density were recorded every 30 min over 4 days. The presented platform is flexible, simple and automated, and allows for fast, robust and sensitive analysis with low sample consumption and high sample throughput. It is compatible with up- and out-scaling, and as such provides a promising new solution to meet future demands in process monitoring in the biopharmaceutical industry. PMID:27114228

  3. Evaluation of an automated rapid diagnostic assay for detection of Gram-negative bacteria and their drug-resistance genes in positive blood cultures.

    Directory of Open Access Journals (Sweden)

    Masayoshi Tojo

    Full Text Available We evaluated the performance of the Verigene Gram-Negative Blood Culture Nucleic Acid Test (BC-GN; Nanosphere, Northbrook, IL, USA), an automated multiplex assay for rapid identification of positive blood cultures caused by 9 Gram-negative bacteria (GNB) and for detection of 9 genes associated with β-lactam resistance. The BC-GN assay can be performed directly from positive blood cultures with 5 minutes of hands-on and 2 hours of run time per sample. A total of 397 GNB-positive blood cultures were analyzed using the BC-GN assay. Of the 397 samples, 295 were simulated samples prepared by inoculating GNB into blood culture bottles, and the remaining were clinical samples from 102 patients with positive blood cultures. Aliquots of the positive blood cultures were tested by the BC-GN assay. The results of bacterial identification between the BC-GN assay and standard laboratory methods were as follows: Acinetobacter spp. (39 isolates for the BC-GN assay/39 for the standard methods), Citrobacter spp. (7/7), Escherichia coli (87/87), Klebsiella oxytoca (13/13), Proteus spp. (11/11), Enterobacter spp. (29/30), Klebsiella pneumoniae (62/72), Pseudomonas aeruginosa (124/125), and Serratia marcescens (18/21). From the 102 clinical samples, 104 bacterial species were identified with the BC-GN assay, whereas 110 were identified with the standard methods. The BC-GN assay also detected all β-lactam resistance genes tested (233 genes, including 54 blaCTX-M, 119 blaIMP, 8 blaKPC, 16 blaNDM, 24 blaOXA-23, 1 blaOXA-24/40, 1 blaOXA-48, 4 blaOXA-58, and 6 blaVIM). The data shows that the BC-GN assay provides rapid detection of GNB and β-lactam resistance genes in positive blood cultures and has the potential to contribute to optimal patient management by earlier detection of major antimicrobial resistance genes.

  4. A comprehensive and precise quantification of the calanoid copepod Acartia tonsa (Dana) for intensive live feed cultures using an automated ZooImage system

    DEFF Research Database (Denmark)

    Vu, Minh Thi Thuy; Jepsen, Per Meyer; Hansen, Benni Winding

    2014-01-01

    ...ignored. In this study, we propose a novel method for highly precise classification of development stages and biomass of A. tonsa in intensive live feed cultures, using an automated ZooImage system, a freeware image-analysis tool. We successfully created a training set of 13 categories, including 7 copepod and 6 non-copepod (debris) groups. ZooImage used this training set for automatic discrimination through a random forest algorithm with a general accuracy of 92.8%. ZooImage showed no significant difference in classifying solitary eggs, or mixed nauplii stages and copepodites, compared to personal microscope observation. Furthermore, ZooImage was also adapted for automatic estimation of A. tonsa biomass. This is the first study that has successfully applied ZooImage software, enabling fast and reliable quantification of the development stages and the biomass of A. tonsa. As a result, relevant...

  5. M2m Automation: Matlab-To-Map Reduce Automation

    Directory of Open Access Journals (Sweden)

    Archana C S

    2014-06-01

    Full Text Available Abstract: MapReduce is a very popular parallel programming model for cloud computing platforms, and has become an effective method for processing massive data on clusters of computers. A programming-language-to-MapReduce automator is a possible solution to help traditional programmers easily deploy applications to cloud systems by translating sequential code into MapReduce code. M2M Automation mainly focuses on automating numerical computations by using Hadoop at the back end: it automates Hadoop for faster execution of Matlab commands using MapReduce code.
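
    The translation idea can be illustrated with a toy example: a sequential accumulation loop rewritten as map and reduce stages, the shape such an automator would emit for a numerical kernel. Hadoop specifics are omitted; this is plain Python, not M2M's output.

    ```python
    # Sequential loop vs. its map/reduce decomposition for a sum of squares.
    from functools import reduce

    data = range(1, 10001)

    # Sequential original:
    total_sq = 0
    for x in data:
        total_sq += x * x

    # MapReduce form: the map stage emits partial values, the reduce stage
    # combines them; both stages are trivially parallelizable across a cluster.
    mapped = map(lambda x: x * x, data)
    total_sq_mr = reduce(lambda a, b: a + b, mapped, 0)
    assert total_sq == total_sq_mr
    ```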

  6. Spheroid formation of human thyroid cancer cells in an automated culturing system during the Shenzhou-8 Space mission.

    Science.gov (United States)

    Pietsch, Jessica; Ma, Xiao; Wehland, Markus; Aleshcheva, Ganna; Schwarzwälder, Achim; Segerer, Jürgen; Birlem, Maria; Horn, Astrid; Bauer, Johann; Infanger, Manfred; Grimm, Daniela

    2013-10-01

    Human follicular thyroid cancer cells were cultured in Space to investigate the impact of microgravity on 3D growth. For this purpose, we designed and constructed a cell container that can endure enhanced physical forces, is connected to fluid storage chambers, performs media changes and cell harvesting automatically and supports cell viability. The container consists of a cell suspension chamber, two reserve tanks for medium and fixative and a pump for fluid exchange. The selected materials proved durable, non-cytotoxic, and did not inactivate RNAlater. This container was operated automatically during the unmanned Shenzhou-8 Space mission. FTC-133 human follicular thyroid cancer cells were cultured in Space for 10 days. Culture medium was exchanged after 5 days in Space and the cells were fixed after 10 days. The experiment revealed a scaffold-free formation of extraordinary large three-dimensional aggregates by thyroid cancer cells with altered expression of EGF and CTGF genes under real microgravity. PMID:23866977

  7. Automated Storage Retrieval System (ASRS) Role Towards Achievement of Safety Objective and Safety Culture in Radioactive Storage Facilities

    International Nuclear Information System (INIS)

    Waste Technology Development Centre (WasTeC) was awarded ISO 9001:2000 quality management system certification in June 2004, now known as ISO 9001:2008. The scope of the unit's ISO certification is radioactive waste management and storage of radioactive material. To meet the objectives and requirements of ISO 9001:2008, WasTeC started a project known as the Automated Storage and Retrieval System (ASRS). ASRS is a computer-controlled method for automatically depositing and retrieving waste from defined locations. The system replaces the existing process of storage and retrieval of radioactive waste at the storage facility at block 33. The main objective of this project is to reduce radiation exposure to workers and the potential for forklift accidents during storage and retrieval of radioactive waste. By using the ASRS, WasTeC/Nuclear Malaysia can provide safe storage of radioactive waste; the system also eliminates repeated handling and improves productivity. (author)

  8. Parallel Programming Environment for OpenMP

    OpenAIRE

    Insung Park; Michael J. Voss; Seon Wook Kim; Rudolf Eigenmann

    2001-01-01

    We present our effort to provide a comprehensive parallel programming environment for the OpenMP parallel directive language. This environment includes a parallel programming methodology for the OpenMP programming model and a set of tools (Ursa Minor and InterPol) that support this methodology. Our toolset provides automated and interactive assistance to parallel programmers in time-consuming tasks of the proposed methodology. The features provided by our tools include performance and program...

  9. SKU splitting simulation for automated picking system based on parallel picking strategy

    Institute of Scientific and Technical Information of China (English)

    吴颖颖; 吴耀华

    2012-01-01

    To improve the efficiency of the automated picking system, a Stock Keeping Unit (SKU) splitting model was built based on parallel picking. The model's optimization goal is for every dispenser to finish buffering its goods before order merging. A delay factor was proposed to represent the difference between the merging time and the dispensing time, and it was proved that the delay factor of each channel has the same variation trend as the delay time. To solve the model, a heuristic splitting algorithm based on the delay factor was designed. Simulation results showed that the heuristic algorithm shortens picking time by 8.55%-11.7%.

  10. Rapid detection of significant bacteriuria by use of an automated Limulus amoebocyte lysate assay.

    OpenAIRE

    Jorgensen, J H; Alexander, G A

    1982-01-01

    Previous studies have demonstrated that significant gram-negative bacteriuria can be detected by using the Limulus amoebocyte lysate test. A series of 580 urine specimens were tested in parallel with the automated MS-2 (Abbott Laboratories) assay and with quantitative urine bacterial cultures. The overall ability of the MS-2 Limulus amoebocyte lysate test to correctly classify urine specimens as containing either greater than or equal to 10(5) organisms or less than 10(5) organisms per ml dur...

  11. Multiple Microfermentor Battery: a Versatile Tool for Use with Automated Parallel Cultures of Microorganisms Producing Recombinant Proteins and for Optimization of Cultivation Protocols

    OpenAIRE

    Frachon, Emmanuel; Bondet, Vincent; Munier-Lehmann, Hélène; Bellalou, Jacques

    2006-01-01

    A multiple microfermentor battery was designed for high-throughput recombinant protein production in Escherichia coli. This novel system comprises eight aerated glass reactors with a working volume of 80 ml and a moving external optical sensor for measuring optical densities at 600 nm (OD600) ranging from 0.05 to 100 online. Each reactor can be fitted with miniature probes to monitor temperature, dissolved oxygen (DO), and pH. Independent temperature regulation for each vessel is obtained wit...

  12. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system that detects the presence of unexpected behavior.

  13. Application of selective hydride generation-automated cryotrapping, gas chromatography, AAS to speciation analysis of methylated arsenicals in water and cell cultures at sub-PPB levels

    International Nuclear Information System (INIS)

    Complete text of publication follows. Speciation analysis based on selective hydride generation-cryotrapping-gas chromatography-AAS with a multiatomizer represents a viable alternative and complementary technique to approaches based on a separation technique (most often liquid chromatography) coupled to an ICP-MS detector. HG-based methods are not confined to minute sample volumes and usually do not require an extraction step. Therefore, excellent limits of detection can be achieved with relatively simple and inexpensive instrumentation, and the risk of altering the speciation during sample pretreatment is minimized. Only species forming volatile hydrides are amenable to analysis, i.e., in the case of arsenic, the tri- and pentavalent forms of inorganic, mono-, di- and trimethylated compounds. Since exactly these species are found in the human detoxication metabolism of iAs, the method is very suitable for toxicological studies. The application of a fully automated system including the cryotrapping step will be presented. Limits of detection of the method were 21 ppt for iAs (limited by blanks) and 3-10 ppt for the methylated forms (limited by the signal-to-noise ratio). Sample throughput was approximately 8 per hour. The analytical performance of the system will be demonstrated on speciation analysis of methylated arsenicals in water reference materials with a total arsenic content of 0.7-1.3 ppb. The second example is the analysis of arsenic species in cell culture experiments, in which the methylating cells were exposed to iAs at 0.25-0.5 μM levels. The methylated species formed by the cells are then determined in cell lysates and cell culture medium. Notably, all forms exhibit the same sensitivity and can therefore be calibrated against a single stable As form. The authors kindly acknowledge the financial support from the University of North Carolina at Chapel Hill Gillings Innovation Laboratory, the Academy of Sciences of the Czech Republic (Institutional research plan No. AV0Z 40310501), Czech

  14. Parallelizing Mizar

    CERN Document Server

    Urban, Josef

    2012-01-01

    This paper surveys and describes the implementation of parallelization of the Mizar proof checking and of related Mizar utilities. The implementation makes use of Mizar's compiler-like division into several relatively independent passes, with typically quite different processing speeds. The information produced in earlier (typically much faster) passes can be used to parallelize the later (typically much slower) passes. The parallelization now works by splitting the formalization into a suitable number of pieces that are processed in parallel, and assembling the required results from them. The implementation is evaluated on examples from the Mizar library, and future extensions are discussed.

  15. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  16. Evaluation of 3 automated real-time PCR (Xpert C. difficile assay, BD MAX Cdiff, and IMDx C. difficile for Abbott m2000 assay) for detecting Clostridium difficile toxin gene compared to toxigenic culture in stool specimens.

    Science.gov (United States)

    Yoo, Jaeeun; Lee, Hyeyoung; Park, Kang Gyun; Lee, Gun Dong; Park, Yong Gyu; Park, Yeon-Joon

    2015-09-01

    We evaluated the performance of the 3 automated systems (Cepheid Xpert, BD MAX, and IMDx C. difficile for Abbott m2000) detecting Clostridium difficile toxin gene compared to toxigenic culture. Of the 254 stool specimens tested, 87 (48 slight, 35 moderate, and 4 heavy growth) were toxigenic culture positive. The overall sensitivities and specificities were 82.8% and 98.8% for Xpert, 81.6% and 95.8% for BD MAX, and 62.1% and 99.4% for IMDx, respectively. The specificity was significantly higher in IMDx than BD MAX (P= 0.03). All stool samples underwent toxin A/B enzyme immunoassay testing, and of the 254 samples, only 29 samples were positive and 2 of them were toxigenic culture negative. Considering the rapidity and high specificity of the real-time PCR assays compared to the toxigenic culture, they can be used as the first test method for C. difficile infection/colonization. PMID:26081240

  17. Parallel quicksort

    Energy Technology Data Exchange (ETDEWEB)

    Vrto, I. (Inst. of Technical Cybernetics, Slovac Academy of Sciences, Dubravska Cesta 9, 842-37 Bratislava (CS)); Chelbus, B.S. (Dept. of Computer Science, Univ. of California, Riverside, CA (US))

    1991-04-01

    This paper reports on the development of a parallel version of quicksort on a CRCW PRAM. The algorithm uses n processors and a linear space to sort n keys in the expected time O(log n) with large probability.
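
    The paper's CRCW PRAM algorithm is not directly implementable on commodity hardware, but the underlying idea, sorting the two sides of a partition concurrently, can be sketched with worker processes. The sketch below parallelizes only the top-level recursion and is an illustration, not the paper's algorithm.

    ```python
    # Partition around a pivot and sort the two sides in parallel processes.
    from concurrent.futures import ProcessPoolExecutor

    def quicksort(keys):
        if len(keys) <= 1:
            return list(keys)
        pivot, rest = keys[0], keys[1:]
        lo = [k for k in rest if k < pivot]
        hi = [k for k in rest if k >= pivot]
        return quicksort(lo) + [pivot] + quicksort(hi)

    def parallel_quicksort(keys):
        if len(keys) <= 1:
            return list(keys)
        pivot, rest = keys[0], keys[1:]
        lo = [k for k in rest if k < pivot]
        hi = [k for k in rest if k >= pivot]
        with ProcessPoolExecutor(max_workers=2) as pool:   # one level of parallelism
            f_lo, f_hi = pool.submit(quicksort, lo), pool.submit(quicksort, hi)
            return f_lo.result() + [pivot] + f_hi.result()

    if __name__ == "__main__":
        import random
        xs = [random.random() for _ in range(10000)]
        assert parallel_quicksort(xs) == sorted(xs)
    ```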

  18. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media, and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual efforts in library routines, and supports collection, storage, administration, processing, preservation, and communication.

  19. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Rapid advances in computing resulting from the microchip revolution have increased its applications manifold, particularly for computer automation. Yet the available level of automation has limited its application to more complex and dynamic systems that require intelligent computer control. In this paper, a review of artificial intelligence techniques used to augment automation is presented. The sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic-processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence leads towards parallel processing. An overview of parallel processing, with emphasis on the transputer, is also provided. A fuzzy knowledge-based controller for amination drug delivery in muscle relaxant anesthesia, implemented on a transputer, is described. 4 figs. (author)

  20. Process automation

    International Nuclear Information System (INIS)

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  1. cultural

    Directory of Open Access Journals (Sweden)

    Irene Kreutz

    2006-01-01

    Full Text Available This is a qualitative study that adopted anthropology and ethnography as its theoretical-methodological framework. It presents the experiences of women from a community in the health-disease process, with the objective of understanding the socio-cultural and historical determinants of the prevention and treatment practices adopted by the cultural group, by means of semi-structured interviews. The themes that emerged were: the relationship between food and the health-disease process, relations with the official health system, and the health-disease process and the supernatural. The data revealed that the residents of the investigated community have a particular way of explaining their therapeutic procedures. We consider it the role of health professionals to adopt, in their practices, approaches that consider the individual in his or her socio-cultural and historical dimension, given the enormous cultural diversity in our country.

  2. Automation of antimicrobial activity screening.

    Science.gov (United States)

    Forry, Samuel P; Madonna, Megan C; López-Pérez, Daneli; Lin, Nancy J; Pasco, Madeleine D

    2016-03-01

    Manual and automated methods were compared for routine screening of compounds for antimicrobial activity. Automation generally accelerated assays and required less user intervention while producing comparable results. Automated protocols were validated for planktonic, biofilm, and agar cultures of the oral microbe Streptococcus mutans that is commonly associated with tooth decay. Toxicity assays for the known antimicrobial compound cetylpyridinium chloride (CPC) were validated against planktonic, biofilm forming, and 24 h biofilm culture conditions, and several commonly reported toxicity/antimicrobial activity measures were evaluated: the 50 % inhibitory concentration (IC50), the minimum inhibitory concentration (MIC), and the minimum bactericidal concentration (MBC). Using automated methods, three halide salts of cetylpyridinium (CPC, CPB, CPI) were rapidly screened with no detectable effect of the counter ion on antimicrobial activity. PMID:26970766
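
    Of the toxicity measures listed above, the IC50 is typically extracted by fitting a dose-response curve to screening data. The sketch below fits a four-parameter logistic (Hill) curve; the concentrations and growth values are illustrative, not the paper's data.

    ```python
    # Fit a four-parameter logistic (Hill) dose-response curve and report IC50.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, top, bottom, ic50, n):
        return bottom + (top - bottom) / (1.0 + (c / ic50) ** n)

    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # CPC, ug/mL (assumed)
    growth = np.array([0.98, 0.95, 0.80, 0.45, 0.12, 0.03])  # normalized growth
    (top, bottom, ic50, n), _ = curve_fit(hill, conc, growth, p0=(1.0, 0.0, 2.0, 1.0))
    print(f"IC50 ~ {ic50:.2f} ug/mL")
    ```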

  3. Automated Microbial Metabolism Laboratory

    Science.gov (United States)

    1973-01-01

    Development of the automated microbial metabolism laboratory (AMML) concept is reported. The focus of effort of AMML was on the advanced labeled release experiment. Labeled substrates, inhibitors, and temperatures were investigated to establish a comparative biochemical profile. Profiles at three time intervals on soil and pure cultures of bacteria isolated from soil were prepared to establish a complete library. The development of a strategy for the return of a soil sample from Mars is also reported.

  4. PARALLEL STABILIZATION

    Institute of Scientific and Technical Information of China (English)

    J.L.LIONS

    1999-01-01

    A new algorithm for the stabilization of (possibly turbulent, chaotic) distributed systems, governed by linear or non-linear systems of equations, is presented. The SPA (Stabilization Parallel Algorithm) is based on a systematic parallel decomposition of the problem (related to arbitrarily overlapping decompositions of domains) and on a penalty argument. SPA is presented here for the case of linear parabolic equations, with distributed or boundary control. It extends to practically all linear and non-linear evolution equations, as will be presented in several other publications.

  5. Automation of cell line development

    OpenAIRE

    Lindgren, Kristina; Salmén, Andréa; Lundgren, Mats; Bylund, Lovisa; Ebler, Åsa; Fäldt, Eric; Sörvik, Lina; Fenge, Christel; Skoging-Nyberg, Ulrica

    2009-01-01

    An automated platform for development of high producing cell lines for biopharmaceutical production has been established in order to increase throughput and reduce development costs. The concept is based on the Cello robotic system (The Automation Partnership) and covers screening for colonies and expansion of static cultures. In this study, the glutamine synthetase expression system (Lonza Biologics) for production of therapeutic monoclonal antibodies in Chinese hamster ovary cells was used ...

  6. An automated HIV-1 Env-pseudotyped virus production for global HIV vaccine trials.

    Directory of Open Access Journals (Sweden)

    Anke Schultz

    Full Text Available BACKGROUND: Infections with HIV still represent a major human health problem worldwide and a vaccine is the only long-term option to fight efficiently against this virus. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To cover the increasing demands of HIV pseudoviruses, a complete cell culture and transfection automation system has been developed. METHODOLOGY/PRINCIPAL FINDINGS: The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5×3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and the product. HIV pseudovirus stocks in a scale from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automatically produced pseudoviruses were of equivalent quality as those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity. CONCLUSIONS: An automated HIV pseudovirus production system has been successfully established. It allows the high quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell

  7. Robotic platform for parallelized cultivation and monitoring of microbial growth parameters in microwell plates.

    Science.gov (United States)

    Knepper, Andreas; Heiser, Michael; Glauche, Florian; Neubauer, Peter

    2014-12-01

    The enormous number of possible variations in a bioprocess challenges process development to fix a commercial process within cost and time constraints. Although some cultivation systems and some devices for unit operations combine the latest technology in miniaturization, parallelization, and sensing, the degree of automation in upstream and downstream bioprocess development is still limited to single steps. We aim to face this challenge with an interdisciplinary approach to significantly shorten development times and costs. As a first step, we scaled down analytical assays to the microliter scale and created automated procedures for starting the cultivation and monitoring the optical density (OD), pH, concentrations of glucose and acetate in the culture medium, and product formation in fed-batch cultures in the 96-well format. Then, the separate measurements of pH, OD, and concentrations of acetate and glucose were combined into one method. This method enables automated process monitoring at dedicated intervals (e.g., also during the night). By this approach, we managed to increase the information content of cultivations in 96-well microplates, thus turning them into a suitable tool for high-throughput bioprocess development. Here, we present the flowcharts as well as cultivation data of our automation approach. PMID:25208534

  8. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  9. Towards Distributed Memory Parallel Program Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed-memory parallel computer architectures, where previously only shared-memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed-memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis, which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  10. Towards Automated Testing of Web Service Choreographies

    OpenAIRE

    Besson F.M.; Leal P.M.B.; Kon F.; Goldman A; Milojicic D.

    2011-01-01

    Web service choreographies have been proposed as a decentralized scalable way of composing services in a SOA environment. In spite of all the benefits of choreographies, the decentralized flow of information, the parallelism, and multiple party communication restrict the automated testing of choreographies at design and runtime. The goal of our research is to adapt the automated testing techniques used by the Agile Software Development community to the SOA context. To achieve that, we seek to ...

  11. Automation Security

    OpenAIRE

    Mirzoev, Dr. Timur

    2014-01-01

    Web-based Automated Process Control systems are a new type of application that uses the Internet to control industrial processes with access to real-time data. Supervisory control and data acquisition (SCADA) networks contain computers and applications that perform key functions in providing essential services and commodities (e.g., electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. As such, they are part of the nation's critical infrastructu...

  12. Automated recognition of cell phenotypes in histology images based on membrane- and nuclei-targeting biomarkers

    International Nuclear Information System (INIS)

    Three-dimensional in vitro cultures of cancer cells are used to predict the effects of prospective anti-cancer drugs in vivo. In this study, we present an automated image analysis protocol for detailed morphological protein marker profiling of tumoroid cross section images. Histologic cross sections of breast tumoroids developed in co-culture suspensions of breast cancer cell lines, stained for E-cadherin (Ecad) and progesterone receptor (PR), were digitized, and pixels in these images were classified into five categories using k-means clustering. Automated segmentation was used to identify image regions composed of cells expressing a given biomarker. Synthesized images were created to check the accuracy of the image processing system. Accuracy of automated segmentation was over 95% in identifying regions of interest in synthesized images. Image analysis of adjacent histology slides stained, respectively, for Ecad and PR accurately predicted regions of different cell phenotypes. Image analysis of tumoroid cross sections from different tumoroids obtained under the same co-culture conditions indicated the variation of cellular composition from one tumoroid to another. Variations in the compositions of cross sections obtained from the same tumoroid were established by parallel analysis of Ecad- and PR-stained cross section images. The proposed image analysis methods offer standardized high-throughput profiling of the molecular anatomy of tumoroids based on both membrane and nuclei markers, which is suitable for rapid large-scale investigations of anti-cancer compounds for drug development.
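
    A minimal sketch of the pixel-classification step described above, assuming scikit-learn is available: pixels of an image are clustered into five categories with k-means, and per-category area fractions then give a crude composition profile for a cross section. The synthetic random image stands in for a digitized, stained tumoroid section.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for a digitized histology cross section (H x W RGB).
    rng = np.random.default_rng(0)
    image = rng.random((64, 64, 3))

    # Classify every pixel into five categories, as in the abstract.
    pixels = image.reshape(-1, 3)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
    label_map = labels.reshape(image.shape[:2])   # per-pixel category image

    # Area fraction per category: a simple composition profile that can be
    # compared across cross sections and across tumoroids.
    fractions = np.bincount(labels, minlength=5) / labels.size
    for k, f in enumerate(fractions):
        print(f"category {k}: {100 * f:.1f}% of section area")
    ```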

  13. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO is currently integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops at the outermost level, construction and optimization of parallel regions around those loops, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts have also been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  14. Automating the multiprocessing environment

    Science.gov (United States)

    Arpasi, Dale J.

    1989-01-01

    An approach to automate the programming and operation of tree-structured networks of multiprocessor systems is discussed. A conceptual, knowledge-based operating environment is presented, and requirements for two major technology elements are identified as follows: (1) An intelligent information translator is proposed for implementing information transfer between dissimilar hardware and software, thereby enabling independent and modular development of future systems and promoting language independence of codes and information; (2) A resident system activity manager, which recognizes the systems' capabilities and monitors the status of all systems within the environment, is proposed for integrating dissimilar systems into effective parallel processing resources to optimally meet user needs. Finally, key computational capabilities which must be provided before the environment can be realized are identified.

  15. Study on Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    Guo-Liang Chen; Guang-Zhong Sun; Yun-Quan Zhang; Ze-Yao Mo

    2006-01-01

    In this paper, we present a general survey on parallel computing. The main contents include the parallel computer system, which is the hardware platform of parallel computing; the parallel algorithm, which is its theoretical base; and parallel programming, which is its software support. After that, we also introduce some parallel applications and enabling technologies. We argue that parallel computing research should form an integrated methodology of "architecture - algorithm - programming - application". Only in this way can parallel computing research achieve continuous development and become more realistic.

  16. Acquisition of data from on-line laser turbidimeter and calculation of some kinetic variables in computer-coupled automated fed-batch culture

    International Nuclear Information System (INIS)

    Output signals of a commercially available on-line laser turbidimeter exhibit fluctuations due to air and/or CO2 bubbles. A simple data processing algorithm and personal computer software have been developed to smooth the noisy turbidity data acquired and to utilize them for the on-line calculation of some kinetic variables involved in batch and fed-batch cultures of uniformly dispersed microorganisms. With this software, about 10^3 instantaneous turbidity readings acquired over 55 s are averaged and converted to the dry cell concentration, X, every minute. Also, the volume of the culture broth, V, is estimated from the averaged output data for the weight loss of the feed solution reservoir, W, using an electronic balance on which the reservoir is placed. Then, the computer software performs linear regression analyses over the past 30 min of the total biomass, VX, the natural logarithm of the total biomass, ln(VX), and the weight loss, W, in order to calculate the volumetric growth rate, d(VX)/dt, the specific growth rate, μ [= d ln(VX)/dt], and the rate of weight loss, dW/dt, every minute in a fed-batch culture. The software used to perform the first-order regression analyses of VX, ln(VX) and W was applied to batch and fed-batch cultures of Escherichia coli on minimum synthetic or natural complex media. Sample determination coefficients of the three different variables (VX, ln(VX) and W) were close to unity, indicating that the calculations are accurate. Furthermore, the growth yield, Yx/s, and the specific substrate consumption rate, qsc, were approximately estimated from the rates dW/dt and d(VX)/dt in a 'balanced' fed-batch culture of E. coli on the minimum synthetic medium, where the computer-aided substrate-feeding system automatically matches well with the cell growth. (author)
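
    The regression step in this abstract translates almost directly into code. A minimal sketch, assuming one smoothed data point per minute and using numpy.polyfit for the first-order regressions over the trailing 30 minutes; the synthetic arrays stand in for the turbidimeter (X), volume estimate (V), and balance (W) signals.

    ```python
    import numpy as np

    def window_slope(t_min, y, width_min=30.0):
        """First-order (linear) regression slope over the trailing window."""
        mask = t_min >= t_min[-1] - width_min
        return np.polyfit(t_min[mask], y[mask], 1)[0]

    # Synthetic one-per-minute data standing in for the smoothed instrument
    # outputs: X (g/L dry cells), V (L broth volume), W (g reservoir weight loss).
    t = np.arange(0.0, 120.0)                 # minutes
    X = 0.5 * np.exp(0.3 * t / 60.0)          # dry cell concentration
    V = 1.0 + 0.001 * t                       # volume estimated from balance data
    W = 1.05 * t                              # feed reservoir weight loss

    VX = V * X                                # total biomass
    dVX_dt = window_slope(t, VX)              # volumetric growth rate, d(VX)/dt
    mu = window_slope(t, np.log(VX))          # specific growth rate, d ln(VX)/dt
    dW_dt = window_slope(t, W)                # feed rate from weight loss

    print(f"d(VX)/dt = {dVX_dt:.4f} g/min, mu = {60 * mu:.3f} 1/h, dW/dt = {dW_dt:.2f} g/min")
    ```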

  17. A Performance Analysis Tool for PVM Parallel Programs

    Institute of Scientific and Technical Information of China (English)

    Chen Wang; Yin Liu; Changjun Jiang; Zhaoqing Zhang

    2004-01-01

    In this paper, we introduce the design and implementation of ParaVT, which is a visual performance analysis and parallel debugging tool. In ParaVT, we propose an automated instrumentation mechanism. Based on this mechanism, ParaVT automatically analyzes the performance bottleneck of parallel applications and provides a visual user interface to monitor and analyze the performance of parallel programs. In addition, it also supports certain extensions.

  18. Automated Budget System

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  19. Special parallel processing workshop

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts relating to parallel processing.

  20. Manufacturing and automation

    OpenAIRE

    Ernesto Córdoba Nieto

    2010-01-01

    The article presents concepts and definitions from different sources concerning automation. The work approaches automation by virtue of the author’s experience in manufacturing production; why and how automation projects are embarked upon is considered. Technological reflection regarding the progressive advances or stages of automation in the production area is stressed. Coriat and Freyssenet’s thoughts about and approaches to the problem of automation and its current state are taken and e...

  1. Automation tools for flexible aircraft maintenance.

    Energy Technology Data Exchange (ETDEWEB)

    Prentice, William J.; Drotning, William D.; Watterberg, Peter A.; Loucks, Clifford S.; Kozlowski, David M.

    2003-11-01

    This report summarizes the accomplishments of the Laboratory Directed Research and Development (LDRD) project 26546 at Sandia, during the period FY01 through FY03. The project team visited four DoD depots that support extensive aircraft maintenance in order to understand critical needs for automation, and to identify maintenance processes for potential automation or integration opportunities. From the visits, the team identified technology needs and application issues, as well as non-technical drivers that influence the application of automation in depot maintenance of aircraft. Software tools for automation facility design analysis were developed, improved, extended, and integrated to encompass greater breadth for eventual application as a generalized design tool. The design tools for automated path planning and path generation have been enhanced to incorporate those complex robot systems with redundant joint configurations, which are likely candidate designs for a complex aircraft maintenance facility. A prototype force-controlled actively compliant end-effector was designed and developed based on a parallel kinematic mechanism design. This device was developed for demonstration of surface finishing, one of many in-contact operations performed during aircraft maintenance. This end-effector tool was positioned along the workpiece by a robot manipulator, programmed for operation by the automated planning tools integrated for this project. Together, the hardware and software tools demonstrate many of the technologies required for flexible automation in a maintenance facility.

  2. Logical Inference Techniques for Loop Parallelization

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the paralleliza...... of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECT-CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers....

  3. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio. Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  4. Parallelism in Constraint Programming

    OpenAIRE

    Rolf, Carl Christian

    2011-01-01

    Writing efficient parallel programs is the biggest challenge of the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard even in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent. Parallelism can increase performance and reduce power consumption at the same time. Many programs will execute faster on a...

  5. Manufacturing and automation

    Directory of Open Access Journals (Sweden)

    Ernesto Córdoba Nieto

    2010-04-01

    Full Text Available The article presents concepts and definitions from different sources concerning automation. The work approaches automation by virtue of the author’s experience in manufacturing production; why and how automation projects are embarked upon is considered. Technological reflection regarding the progressive advances or stages of automation in the production area is stressed. Coriat and Freyssenet’s thoughts about and approaches to the problem of automation and its current state are taken and examined, especially those referring to the problem’s relationship with reconciling the level of automation with the flexibility and productivity demanded by competitive, worldwide manufacturing.

  6. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  7. An automated swimming respirometer

    DEFF Research Database (Denmark)

    STEFFENSEN, JF; JOHANSEN, K; BUSHNELL, PG

    1984-01-01

    An automated respirometer is described that can be used for computerized respirometry of trout and sharks.

  8. Configuration Management Automation (CMA)

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  9. Intensive Culture: Religion and Social Theory in Contemporary Culture

    OpenAIRE

    Lash, Scott

    2010-01-01

    Contemporary culture, today’s capitalism - our global information society - is ever expanding, is ever more extensive. And yet we seem to be experiencing a parallel phenomenon which can only be characterized as intensive. This book is dedicated to the study of such intensive culture. While extensive culture is a culture of the same: a culture of fixed equivalence; intensive culture is a culture of difference, of in-equivalence – the singular. Intensities generate what we encounter. They are vi...

  10. Parallel flow diffusion battery

    Science.gov (United States)

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  11. Workflow automation architecture standard

    Energy Technology Data Exchange (ETDEWEB)

    Moshofsky, R.P.; Rohen, W.T. [Boeing Computer Services Co., Richland, WA (United States)

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  12. Parallel logic programming systems

    OpenAIRE

    Chassin De Kergommeaux, J.; Codognet, Philippe

    1992-01-01

    Parallelizing logic programming has attracted much interest in the research community, because of the intrinsic OR- and AND-parallelism of logic programs. One research stream aims at transparent exploitation of parallelism in existing logic programming languages such as Prolog, while the family of concurrent logic languages develops constructs allowing programmers to express the concurrency, that is, the communication and synchronization between parallel processes, inside their algorithms. This p...

  13. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  14. Introduction to parallel programming

    CERN Document Server

    Brawer, Steven

    1989-01-01

    Introduction to Parallel Programming focuses on the techniques, processes, methodologies, and approaches involved in parallel programming. The book first offers information on Fortran, hardware and operating system models, and processes, shared memory, and simple parallel programs. Discussions focus on processes and processors, joining processes, shared memory, time-sharing with multiple processors, hardware, loops, passing arguments in function/subroutine calls, program structure, and arithmetic expressions. The text then elaborates on basic parallel programming techniques, barriers and race

  15. Parallel adaptive wavelet collocation method for PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com [FortiVenti Inc., Suite 404, 999 Canada Place, Vancouver, BC, V6C 3E2 (Canada); Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States); Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States); Vasilyev, Oleg V., E-mail: Oleg.Vasilyev@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States)

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3 using as many as 2048 CPU cores.

  16. Developing Parallel Programs

    OpenAIRE

    Ranjan Sen

    2012-01-01

    Parallel programming is an extension of sequential programming; today, it is becoming the mainstream paradigm in day-to-day information processing. Its aim is to build the fastest programs on parallel computers. The methodologies for developing a parallel program can be put into integrated frameworks. Development focuses on the algorithm, the languages, and how the program is deployed on the parallel computer.

  17. CNTFET Parallel in Parallel out Shift Register

    Directory of Open Access Journals (Sweden)

    T. Jayanthy

    Full Text Available In this paper, a compact model for the carbon nanotube field-effect transistor (CNTFET) has been designed by considering various device parameters such as length, number of tubes, chiral vector, etc. The modeled CNTFET is used to design various digital circuits, in particular a parallel-in parallel-out (PIPO) shift register. The results of Hspice simulations performed on the designed PIPO shift register show superior performance over the conventional MOSFET in terms of power dissipation, power-delay product, size, etc.

  18. Automated Parallel Computing Tools for Multicore Machines and Clusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to improve productivity of high performance computing for applications on multicore computers and clusters. These machines built from one or more chips...

  19. Shoe-String Automation

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  20. Software Test Automation in Practice: Empirical Observations

    Directory of Open Access Journals (Sweden)

    Jussi Kasurinen

    2010-01-01

    Full Text Available The objective of this industry study is to shed light on the current situation and improvement needs in software test automation. To this end, 55 industry specialists from 31 organizational units were interviewed. In parallel with the survey, a qualitative study was conducted in 12 selected software development organizations. The results indicated that the software testing processes usually follow systematic methods to a large degree, and have only few immediate or critical requirements for resources. Based on the results, the testing processes have approximately three fourths of the resources they need, and have access to a limited, but usually sufficient, group of testing tools. As for test automation, the situation is not as straightforward: based on our study, the applicability of test automation is still limited, and adapting it to testing involves practical difficulties in usability. In this study, we analyze and discuss these limitations and difficulties.

  1. Software Test Automation in Practice: Empirical Observations

    OpenAIRE

    Kari Smolander; Ossi Taipale; Jussi Kasurinen

    2010-01-01

    The objective of this industry study is to shed light on the current situation and improvement needs in software test automation. To this end, 55 industry specialists from 31 organizational units were interviewed. In parallel with the survey, a qualitative study was conducted in 12 selected software development organizations. The results indicated that the software testing processes usually follow systematic methods to a large degree, and have only little immediate or critical requirements fo...

  2. Parallel Atomistic Simulations

    Energy Technology Data Exchange (ETDEWEB)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.
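
    Of the three molecular dynamics decompositions named above, the replicated-data decomposition is the simplest to sketch: every worker holds the full position array and computes forces only for its assigned slice of particles. The toy pairwise force below is illustrative only, not a physical potential, and the worker functions are hypothetical stand-ins for a real MD kernel.

    ```python
    import numpy as np
    from multiprocessing import Pool

    N, DIM = 256, 3
    rng = np.random.default_rng(1)
    positions = rng.random((N, DIM)) * 10.0   # replicated in every worker process

    def forces_for_slice(bounds):
        """Compute pairwise forces for one slice of particles against all particles."""
        lo, hi = bounds
        f = np.zeros((hi - lo, DIM))
        for i in range(lo, hi):
            r = positions[i] - positions              # vectors to every other particle
            d2 = np.einsum("ij,ij->i", r, r)          # squared distances
            d2[i] = np.inf                            # skip self-interaction
            f[i - lo] = np.sum(r / d2[:, None] ** 2, axis=0)  # toy 1/d^3 force
        return f

    if __name__ == "__main__":
        nproc = 4
        edges = np.linspace(0, N, nproc + 1, dtype=int)
        slices = list(zip(edges[:-1], edges[1:]))     # one particle slice per worker
        with Pool(nproc) as pool:
            forces = np.vstack(pool.map(forces_for_slice, slices))
        print(forces.shape)   # (256, 3): full force array reassembled on the master
    ```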

  3. Invariants for Parallel Mapping

    Institute of Scientific and Technical Information of China (English)

    YIN Yajun; WU Jiye; FAN Qinshan; HUANG Kezhi

    2009-01-01

    This paper analyzes the geometric quantities that remain unchanged during parallel mapping (i.e., mapping from a reference curved surface to a parallel surface with identical normal direction). The second gradient operator, the second class of integral theorems, the Gauss-curvature-based integral theorems, and the core property of parallel mapping are used to derive a series of parallel mapping invariants or geometrically conserved quantities. These include not only local mapping invariants but also global mapping invariants found to exist both in a curved surface and along curves on the curved surface. The parallel mapping invariants are used to identify important transformations between the reference surface and parallel surfaces. These mapping invariants and transformations have potential applications in geometry, physics, biomechanics, and mechanics in which various dynamic processes occur along or between parallel surfaces.

  4. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries in the optimal O(sort_P(N) + K/PB) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  5. Parallel digital forensics infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital Forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the implementation of the parallel digital forensics (PDF) infrastructure architecture and implementation.

  6. Parallelization in Modern C++

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuation s...

  7. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  8. Explicit parallel programming

    OpenAIRE

    Gamble, James Graham

    1990-01-01

    While many parallel programming languages exist, they rarely address the issue of communication (implying expressibility and readability). A new language called Explicit Parallel Programming (EPP) attempts to provide this quality by separating the responsibility for the execution of run-time actions from the responsibility for deciding the order in which they occur. The ordering of a parallel algorithm is specified in the new EPP language; run ti...

  9. Parallel Online Learning

    OpenAIRE

    Hsu, Daniel; Karampatziakis, Nikos; Langford, John; Smola, Alex

    2011-01-01

    In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sh...

  10. Programming Parallel Computers

    OpenAIRE

    Chandy, K. Mani

    1988-01-01

    This paper is from a keynote address to the IEEE International Conference on Computer Languages, October 9, 1988. Keynote addresses are expected to be provocative (and perhaps even entertaining), but not necessarily scholarly. The reader should be warned that this talk was prepared with these expectations in mind.Parallel computers offer the potential of great speed at low cost. The promise of parallelism is limited by the ability to program parallel machines effectively. This paper explores ...

  11. Practical Parallel Rendering

    CERN Document Server

    Chalmers, Alan

    2002-01-01

    Meeting the growing demands for speed and quality in rendering computer graphics images requires new techniques. Practical parallel rendering provides one of the most practical solutions. This book addresses the basic issues of rendering within a parallel or distributed computing environment, and considers the strengths and weaknesses of multiprocessor machines and networked render farms for graphics rendering. Case studies of working applications demonstrate, in detail, practical ways of dealing with complex issues involved in parallel processing.

  12. Approach of generating parallel programs from parallelized algorithm design strategies

    Institute of Scientific and Technical Information of China (English)

    WAN Jian-yi; LI Xiao-ying

    2008-01-01

    Today, parallel programming is dominated by message passing libraries, such as the message passing interface (MPI). This article intends to simplify parallel programming by generating parallel programs from parallelized algorithm design strategies. It uses skeletons to abstract parallelized algorithm design strategies, as well as parallel architectures. Starting from a problem specification, an abstract programming language+ (Apla+) parallel program is generated from parallelized algorithm design strategies and problem-specific function definitions. By combining with parallel architectures, the implicit parallelism inside the parallelized algorithm design strategies is exploited. With implementation and transformation, a C++ with parallel virtual machine (CPPVM) parallel program is finally generated. The parallelized branch and bound (B&B) algorithm design strategy and the parallelized divide and conquer (D&C) algorithm design strategy are studied in this article as examples, and the approach is illustrated with a case study.
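
    The skeleton idea above, writing a parallelized design strategy once and instantiating it with problem-specific functions, can be sketched in a few lines of Python. This is a stand-in for the article's Apla+-to-CPPVM generation pipeline, not its actual notation; the divide-and-conquer skeleton is generic, and mergesort is one instantiation.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    # --- problem-specific plug-ins for the skeleton (here: mergesort) ----------
    def trivial(p):
        return len(p) <= 1

    def solve(p):
        return p

    def divide(p):
        mid = len(p) // 2
        return [p[:mid], p[mid:]]

    def combine(parts):
        a, b, out = list(parts[0]), list(parts[1]), []
        while a and b:
            out.append(a.pop(0) if a[0] <= b[0] else b.pop(0))
        return out + a + b

    # --- the reusable parallelized divide-and-conquer strategy -----------------
    def dac(p):
        if trivial(p):
            return solve(p)
        return combine([dac(s) for s in divide(p)])

    def dac_parallel(p, workers=2):
        """Fan the top-level subproblems out to worker processes, then combine."""
        if trivial(p):
            return solve(p)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return combine(list(pool.map(dac, divide(p))))

    if __name__ == "__main__":
        print(dac_parallel([5, 2, 9, 1, 7, 3, 8, 6]))   # [1, 2, 3, 5, 6, 7, 8, 9]
    ```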

  13. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of such problems include sorting, searching, optimization, and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans, and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
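
    As a concrete instance of one pattern named above, here is a sketch of the classic two-phase (up-sweep/down-sweep) work-efficient exclusive prefix scan. It is written sequentially, but every inner loop touches disjoint array slots and could execute concurrently; the input length is assumed to be a power of two.

    ```python
    def exclusive_scan(a):
        """Blelloch-style exclusive prefix sum (length must be a power of two).
        Each inner loop is data-parallel: all iterations update disjoint slots."""
        x = list(a)
        n = len(x)
        # Up-sweep (reduce) phase: build partial sums in a binary tree.
        d = 1
        while d < n:
            for i in range(2 * d - 1, n, 2 * d):   # independent across i
                x[i] += x[i - d]
            d *= 2
        # Down-sweep phase: push prefixes back down the tree.
        x[n - 1] = 0
        d = n // 2
        while d >= 1:
            for i in range(2 * d - 1, n, 2 * d):   # independent across i
                x[i - d], x[i] = x[i], x[i] + x[i - d]
            d //= 2
        return x

    print(exclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [0, 3, 4, 11, 11, 15, 16, 22]
    ```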

  14. Parallel Online Learning

    CERN Document Server

    Hsu, Daniel; Langford, John; Smola, Alex

    2011-01-01

    In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sharding approach that present various tradeoffs between delay, degree of parallelism, representation power and empirical performance.
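
    A single-process sketch of the feature-sharding layout analyzed above, under squared loss: each shard owns a disjoint block of weights, computes a partial dot product, and updates only its own block from the globally reduced prediction. In the authors' distributed setting that prediction would arrive delayed; here the update is synchronous for clarity, and the toy data are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, shards, lr = 12, 3, 0.1
    w = np.zeros(d)
    blocks = np.array_split(np.arange(d), shards)   # disjoint feature shards

    for step in range(200):
        x = rng.normal(size=d)
        y = float(np.sign(x[:4].sum()))             # toy target
        # Each shard computes a partial margin; a reduce yields the prediction.
        partials = [w[idx] @ x[idx] for idx in blocks]
        pred = sum(partials)
        # Broadcast the (in a real system: possibly delayed) prediction back;
        # each shard then updates only its own slice of the weights.
        g = pred - y                                # squared-loss gradient factor
        for idx in blocks:
            w[idx] -= lr * g * x[idx]

    print(np.round(w, 2))
    ```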

  15. CS-Studio Scan System Parallelization

    Energy Technology Data Exchange (ETDEWEB)

    Kasemir, Kay [ORNL; Pearson, Matthew R [ORNL

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.
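
    The kind of simultaneous adjustment described above can be pictured with a small sketch: write several PVs concurrently, then block until every readback settles before the scan moves on. write_pv, pv_reached, and the PV names are hypothetical placeholders, not the CS-Studio or EPICS channel-access API.

    ```python
    from concurrent.futures import ThreadPoolExecutor
    import time

    # Hypothetical stand-ins for channel-access writes and readback checks.
    def write_pv(name, value):
        print(f"put {name} -> {value}")

    def pv_reached(name, value, tolerance=0.1, timeout=30.0):
        time.sleep(0.1)          # pretend the motor/temperature settles
        return True

    def set_parallel(targets):
        """Adjust several PVs at once; block until every readback settles."""
        with ThreadPoolExecutor(max_workers=len(targets)) as pool:
            for name, value in targets.items():
                pool.submit(write_pv, name, value)
            waits = [pool.submit(pv_reached, n, v) for n, v in targets.items()]
            return all(f.result() for f in waits)

    # One scan point: move two motors and a temperature setpoint simultaneously.
    set_parallel({"BL3:Mot:X": 1.5, "BL3:Mot:Y": -0.2, "BL3:Temp:SP": 80.0})
    ```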

  16. Automated stopcock actuator

    OpenAIRE

    Vandehey, N. T.; O'Neil, J.P.

    2015-01-01

    Introduction We have developed a low-cost stopcock valve actuator for radiochemistry automation built using a stepper motor and an Arduino, an open-source single-board microcontroller. The controller hardware can be programmed to run by serial communication or via two 5–24 V digital lines for simple integration into any automation control system. This valve actuator allows for automated use of a single, disposable stopcock, providing a number of advantages over stopcock manifold systems ...

  17. The Adaptive Automation Design

    OpenAIRE

    Calefato, Caterina; Montanari, Roberto; TESAURI, Francesco

    2008-01-01

    After considering the positive effects of adaptive automation implementation, this chapter focuses on two partly overlapping phenomena: on the one hand, the role of trust in automation is considered, particularly the effects of overtrust and mistrust in automation's reliability; on the other hand, long-term lack of practice on specific operations may lead to deterioration of users' skills. As future work, it will be interesting and challenging to explore the conjunction of adaptive automati...

  18. Service functional test automation

    OpenAIRE

    Hillah, Lom Messan; Maesano, Ariele-Paolo; Rosa, Fabio; Maesano, Libero; Lettere, Marco; Fontanelli, Riccardo

    2015-01-01

    This paper presents the automation of the functional test of services (black-box testing) and services architectures (grey-box testing) that has been developed by the MIDAS project and is accessible on the MIDAS SaaS. In particular, the paper illustrates the solutions of tough functional test automation problems such as: (i) the configuration of the automated test execution system against large and complex services architectures, (ii) the constraint-based test input generation, (iii) the spec...

  19. Automated Weather Observing System

    Data.gov (United States)

    Department of Transportation — The Automated Weather Observing System (AWOS) is a suite of sensors, which measure, collect, and disseminate weather data to help meteorologists, pilots, and flight...

  20. Laboratory Automation and Middleware.

    Science.gov (United States)

    Riben, Michael

    2015-06-01

    The practice of surgical pathology is under constant pressure to deliver the highest quality of service, reduce errors, increase throughput, and decrease turnaround time while at the same time dealing with an aging workforce, increasing financial constraints, and economic uncertainty. Although not able to implement total laboratory automation, great progress continues to be made in workstation automation in all areas of the pathology laboratory. This report highlights the benefits and challenges of pathology automation, reviews middleware and its use to facilitate automation, and reviews the progress so far in the anatomic pathology laboratory. PMID:26065792

  1. Automated cloning methods.

    International Nuclear Information System (INIS)

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 °C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  2. Parallel reservoir simulator computations

    International Nuclear Information System (INIS)

    The adaptation of a reservoir simulator for parallel computations is described. The simulator was originally designed for vector processors. It performs approximately 99% of its calculations in vector/parallel mode and, relative to scalar calculations, it achieves speedups of 65 and 81 for black oil and EOS simulations, respectively, on the CRAY C-90
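
    The 99% figure above is exactly the regime where Amdahl's law becomes the controlling arithmetic: with serial fraction s and n-way vector/parallel execution, speedup is bounded by 1/(s + (1 - s)/n). A quick check (illustrative, not taken from the paper) shows the reported speedups of 65 and 81 are consistent with a roughly 1% scalar residue and a high effective vector width.

    ```python
    def amdahl(s, n):
        """Speedup bound for serial fraction s and n-way parallel execution."""
        return 1.0 / (s + (1.0 - s) / n)

    # With ~1% of the work scalar, speedup saturates at 100 no matter how
    # wide the vector/parallel hardware is:
    for n in (64, 128, 256, 1024):
        print(n, round(amdahl(0.01, n), 1))   # 39.3, 56.4, 72.1, 91.2

    # Inverting the bound: speedups of 65 and 81 correspond to effective
    # parallel widths of roughly 184 and 422 at s = 0.01.
    ```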

  3. Patterns For Parallel Programming

    CERN Document Server

    Mattson, Timothy G; Massingill, Berna L

    2005-01-01

    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  4. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  5. Proceedings of the 1988 IEEE international conference on robotics and automation. Volume 1

    International Nuclear Information System (INIS)

    These proceedings compile the papers presented at the 1988 international conference sponsored by the IEEE Council on "Robotics and Automation". The subjects discussed were: automation and robots of nuclear power stations; algorithms of multiprocessors; parallel processing and computer architecture; and U.S. DOE research programs on nuclear power plants

  6. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  7. Automatic Performance Debugging of SPMD Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan

    2010-01-01

    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate analysis processes, but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single program multi-data (SPMD) parallel programs. Our system is unique in terms of two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance o...

  8. Library Automation Style Guide.

    Science.gov (United States)

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  9. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and support

  10. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they are not always successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  11. Automation of Hubble Space Telescope Mission Operations

    Science.gov (United States)

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry

    2012-01-01

    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations including telemetry and forward link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions, and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities required in the new automated operations era. This paper will provide a high-level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  12. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results are other major advantages of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service seeking to provide quality patient care with a shorter turnaround time for an ever-increasing workload. This article discusses the various issues involved in the process.

  13. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  14. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  15. Advances in inspection automation

    Science.gov (United States)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes the complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag and drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  16. Compositional C++: Compositional Parallel Programming

    OpenAIRE

    Chandy, K. Mani; Kesselman, Carl

    1992-01-01

    A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms; imperative and declarative programm...

  17. Parallel nearest neighbor calculations

    Science.gov (United States)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  18. Detection Of Control Flow Errors In Parallel Programs At Compile Time

    Directory of Open Access Journals (Sweden)

    Bruce P. Lester

    2010-12-01

    Full Text Available This paper describes a general technique to identify control flow errors in parallel programs, which can be automated into a compiler. The compiler builds a system of linear equations that describes the global control flow of the whole program. Solving these equations using standard techniques of linear algebra can locate a wide range of control flow bugs at compile time. This paper also describes an implementation of this control flow analysis technique in a prototype compiler for a well-known parallel programming language. In contrast to previous research in automated parallel program analysis, our technique is efficient for large programs, and does not limit the range of language features.
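
    The equation-building idea can be pictured with a toy example. The sketch below is our own illustration, not Lester's compiler: each edge of a small fork/join control-flow graph gets an execution-count variable, each node contributes a flow-balance equation, and a join that waits for three completions where the fork spawned only two makes the linear system inconsistent, which is exactly the kind of structural error such an analysis flags.

```python
# Toy flow-balance analysis of a fork/join control-flow graph.
# Unknowns x = [entry, task_a, task_b, exit] are edge execution counts.
# The fork spawns two tasks, but the join below (incorrectly) waits for
# three completions, so the linear system has no solution.
import numpy as np

A = np.array([
    [ 1, 0, 0,  0],   # entry executes exactly once:       x0 = 1
    [-1, 1, 0,  0],   # fork: task_a runs once per entry:  x1 - x0 = 0
    [-1, 0, 1,  0],   # fork: task_b runs once per entry:  x2 - x0 = 0
    [ 0, 1, 1, -3],   # buggy join expects 3 completions:  x1 + x2 - 3*x3 = 0
    [ 0, 0, 0,  1],   # exit executes exactly once:        x3 = 1
], dtype=float)
b = np.array([1, 0, 0, 0, 1], dtype=float)

# Rouche-Capelli: the system is consistent iff rank(A) == rank([A | b]).
consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))
print("control flow consistent:", consistent)   # False -> flags the bug
```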

  19. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from the PERFECT CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
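
    As a loose, much-simplified analogy to this hybrid analysis (the USR language and the inference algorithm are not reproduced), the sketch below tests at run time that a loop's write-index set is disjoint from its read-index set, an S = Ø style condition, before taking the parallel-safe path; `independent` is a hypothetical helper of our own.

```python
# Hypothetical run-time independence check before parallelizing a loop
# a[w(i)] = 2 * a[r(i)]: if the write set and read set are disjoint and
# writes do not collide, the iterations carry no cross-iteration
# dependences and may run in parallel (here: vectorized).
import numpy as np

def independent(writes, reads):
    """Sufficient condition: unique writes, no location both written and read."""
    w = set(writes)
    return len(w) == len(writes) and w.isdisjoint(reads)

n = 8
a = np.arange(2 * n, dtype=float)
writes = 2 * np.arange(n)          # even slots, via indirect indexing
reads = 2 * np.arange(n) + 1       # odd slots

if independent(writes.tolist(), reads.tolist()):
    a[writes] = 2.0 * a[reads]     # independent: safe to run in parallel
else:
    for w, r in zip(writes, reads):    # conservative sequential fallback
        a[w] = 2.0 * a[r]
print(a)
```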

  20. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  1. Parallelism and array processing

    International Nuclear Information System (INIS)

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  2. Parallels with nature

    Science.gov (United States)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  3. Water balance by automated data acquisition in a wheat (Triticum aestivum L.) culture [The daily water consumption of a wheat culture using atmospheric and soil data]

    Directory of Open Access Journals (Sweden)

    Celso Luiz Prevedello

    2007-02-01

    Full Text Available Using automated acquisition of atmospheric and soil water content data, this study quantified the daily water consumption of a wheat culture on an Oxisol (Latossolo Vermelho) in Ponta Grossa, Paraná State, Brazil, from August to December 2003, emphasizing the contributions of rainfall and of upward water fluxes from the deeper soil layers to that consumption. The results for the monitored period show that: (a) the mean daily water depth evapotranspired by the wheat culture was 6.75 mm, with the upward water flux in the soil profile contributing 62 % of that total; (b) the evapotranspiration rates estimated by the Penman method and by the (soil) water balance equation evolved in time with approximately equal symmetry but with a lag of about seven days, as if the soil responded to the variations imposed by the atmosphere roughly one week later; (c) rainfall had an important effect on soil water storage, contributing to higher evapotranspiration rates; and (d) because the mean matric potential in the root zone was close to the critical limit for the crop, irrigation could have a potentially positive impact on the culture, by making more water available in the soil and sustaining the higher evapotranspiration levels that are agronomically desirable.

  4. Skeletal parallel programming

    OpenAIRE

    Saez, Fernando; Printista, Alicia Marcela; Piccoli, María Fabiana

    2007-01-01

    In recent years the high-performance programming community has worked to find new templates, or skeletons, for several parallel programming paradigms. This form of programming allows the programmer to reduce development time, since it saves effort in the design, testing and coding phases. We are concerned with some issues of skeletons that are fundamental to the definition of any skeletal parallel programming system. This paper presents commentaries about these issues in the c...

  5. Introduction to parallel computing

    CERN Document Server

    2003-01-01

    Introduction to Parallel Computing is a complete end-to-end source of information on almost all aspects of parallel computing from introduction to architectures to programming paradigms to algorithms to programming standards. It is the only book to have complete coverage of traditional Computer Science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data intensive algorithms (search, dynamic programming, data-mining).

  6. Parallelization by Simulated Tunneling

    OpenAIRE

    Waterland, Amos; Appavoo, Jonathan; Seltzer, Margo I.

    2012-01-01

    As highly parallel heterogeneous computers become commonplace, automatic parallelization of software is an increasingly critical unsolved problem. Continued progress on this problem will require large quantities of information about the runtime structure of sequential programs to be stored and reasoned about. Manually formalizing all this information through traditional approaches, which rely on semantic analysis at the language or instruction level, has historically proved challenging. We ta...

  7. The Parallel C Preprocessor

    OpenAIRE

    Eugene D. Brooks III; Gorda, Brent C.; Karen H. Warren

    1992-01-01

    We describe a parallel extension of the C programming language designed for multiprocessors that provide a facility for sharing memory between processors. The programming model was initially developed on conventional shared memory machines with small processor counts such as the Sequent Balance and Alliant FX/8, but has more recently been used on a scalable massively parallel machine, the BBN TC2000. The programming model is split-join rather than fork-join. Concurrency is exploited to use a ...

  8. Heterogeneous Parallel Computing

    OpenAIRE

    Feng, Wu

    2013-01-01

    With processor core counts doubling every 18-24 months and penetrating all markets from high-end servers in supercomputers to desktops and laptops down to even mobile phones, we sit at the dawn of a world of ubiquitous parallelism, one where extracting performance via parallelism is paramount. That is, the "free lunch" to better performance, where programmers could rely on substantial increases in single-threaded performance to improve software, is over. The burden falls on developers to expl...

  9. Combining parallel search and parallel consistency in constraint programming

    OpenAIRE

    Rolf, Carl Christian; Kuchcinski, Krzysztof

    2010-01-01

    Program parallelization becomes increasingly important when new multi-core architectures provide ways to improve performance. One of the greatest challenges of this development lies in programming parallel applications. Declarative languages, such as constraint programming, can make the transition to parallelism easier by hiding the parallelization details in a framework. Automatic parallelization in constraint programming has mostly focused on parallel search. While search and consist...

  10. Continuous parallel coordinates.

    Science.gov (United States)

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. PMID:19834230
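
    For readers unfamiliar with the point-line duality the model builds on, the sketch below draws ordinary discrete parallel coordinates with matplotlib: each d-dimensional data point becomes a polyline across d parallel axes. It is a plain illustration of the representation, not the authors' continuous density model.

```python
# Minimal discrete parallel-coordinates plot: each d-dimensional point
# becomes a polyline across d parallel vertical axes (the point-line
# duality that the continuous model extends to density fields).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 4))                       # 50 points, 4 dims

# Normalize each dimension to [0, 1] so all axes share one vertical scale.
lo, hi = data.min(axis=0), data.max(axis=0)
norm = (data - lo) / (hi - lo)

xs = np.arange(norm.shape[1])                         # one x position per axis
for row in norm:
    plt.plot(xs, row, color="steelblue", alpha=0.3)   # one polyline per point
for x in xs:
    plt.axvline(x, color="black", linewidth=0.8)      # the parallel axes
plt.xticks(xs, [f"dim {i}" for i in xs])
plt.title("Discrete parallel coordinates")
plt.show()
```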

  11. Improvement of Test Automation

    OpenAIRE

    Räsänen, Timo

    2013-01-01

    The purpose of this study was to find out how to ensure that the automated testing of MME in Network Verification continues to run smoothly and reliably while using the in-house developed test automation framework. The goal of this thesis was to reveal the reasons for the currently challenging situation and to find the key elements to be improved in the MME testing carried out by the test automation. A further reason for the study was to get solutions as to how to change the current procedures and wa...

  12. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure.The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management

  13. A centralized global automation group in a decentralized organization

    OpenAIRE

    Jeffrey Veitch; Judy Hinderliter-Smith; James Ormand; Jimmy Bruner; Larry Birkemo

    2000-01-01

    In the latter part of the 1990s, many companies have worked to foster a ‘matrix’ style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized ...

  14. A New Era for Cytogenetics Laboratories: Automated Specimen Preparation

    OpenAIRE

    Shaunnessey, M.S.; Martin, A.O.; Sabrin, H.W.; Cimino, M.C.; Rissman, A

    1981-01-01

    The current capacity of clinical cytogenetics laboratories is limited by the labor intensiveness of the process. Specimen preparation for analysis consists of several steps: culture initiation, culture “harvest” (transfer of cells in culture to microscope slides), and staining. Steps in the analysis include cell location and selection, counting, and examination of chromosomes. In this report we will present preliminary results of evaluations and development of a Computer Automated Specimen Pr...

  15. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  16. I-94 Automation FAQs

    Data.gov (United States)

    Department of Homeland Security — In order to increase efficiency, reduce operating costs and streamline the admissions process, U.S. Customs and Border Protection has automated Form I-94 at air and...

  17. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  18. Hydrometeorological Automated Data System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Office of Hydrologic Development of the National Weather Service operates HADS, the Hydrometeorological Automated Data System. This data set contains the last...

  19. An automated Certification Authority

    CERN Document Server

    Shamardin, L V

    2002-01-01

    This note describes an approach to building an automated Certification Authority. It is compatible with the basic requirements of RFC 2527. It also supports Registration Authorities and automatic certificate renewal with the Globus Toolkit grid-cert-renew tool.

  20. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  1. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  2. Automating Supplier Selection Procedures

    OpenAIRE

    Davidrajuh, Reggie

    2001-01-01

    This dissertation describes a methodology, tools, and implementation techniques for automating the supplier selection procedures of a small and medium-sized agile virtual enterprise. Firstly, a modeling approach is devised that can be used to model the supplier selection procedures of an enterprise. This modeling approach divides the supplier selection procedures broadly into three stages: the pre-selection, selection, and post-selection stages. Secondly, a methodology is presented for automating ...

  3. Taiwan Automated Telescope Network

    OpenAIRE

    Shuhrat Ehgamberdiev; Alexander Serebryanskiy; Antonio Jimenez; Li-Han Wang; Ming-Tsung Sun; Javier Fernandez Fernandez; Dean-Yi Chou

    2010-01-01

    A global network of small automated telescopes, the Taiwan Automated Telescope (TAT) network, dedicated to photometric measurements of stellar pulsations, is under construction. Two telescopes have been installed in Teide Observatory, Tenerife, Spain and Maidanak Observatory, Uzbekistan. The third telescope will be installed at Mauna Loa Observatory, Hawaii, USA. Each system uses a 9-cm Maksutov-type telescope. The effective focal length is 225 cm, corresponding to an f-ratio of 25. The field...

  4. Automated Lattice Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  5. Automated functional software testing

    OpenAIRE

    Jelnikar, Kristina

    2009-01-01

    This work describes an approach to the automation of functional software testing. The introductory part presents the testing problems that development companies are facing. The second chapter describes some testing methods, the role testing plays in software development, some approaches to software development, and the meaning of the testing environment. Chapter 3 is all about test automation. After a brief historical presentation, we demonstrate through s...

  6. Instant Sikuli test automation

    CERN Document Server

    Lau, Ben

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A concise guide written in an easy-to-follow style using the Starter guide approach. This book is aimed at automation and testing professionals who want to use Sikuli to automate GUIs. Some Python programming experience is assumed.

  7. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    International Nuclear Information System (INIS)

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel-beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large-scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum-resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  8. Automated macromolecular crystal detection system and method

    Science.gov (United States)

    Christian, Allen T.; Segelke, Brent; Rupp, Bernard; Toppani, Dominique

    2007-06-05

    An automated macromolecular method and system for detecting crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected from the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently geometrically evaluated with respect to each other to identify any crystal-like qualities such as parallel lines facing each other, similarity in length, and relative proximity. From this evaluation a determination is made as to whether crystals are present in each image.
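
    The geometric evaluation stage can be sketched independently of the patented detector. The snippet below is a hypothetical heuristic of our own: it flags a pair of line segments as "crystal-like" when they are near-parallel, similar in length, and close together; the phase-congruency edge detection itself is not reproduced.

```python
# Heuristic screen for "crystal-like" geometry: flag pairs of line
# segments that are near-parallel, of similar length, and nearby.
# Illustrates the evaluation stage only; thresholds are arbitrary.
import numpy as np

def segment_features(seg):
    (x0, y0), (x1, y1) = seg
    v = np.array([x1 - x0, y1 - y0], dtype=float)
    length = np.linalg.norm(v)
    angle = np.arctan2(v[1], v[0]) % np.pi       # undirected orientation
    mid = np.array([(x0 + x1) / 2.0, (y0 + y1) / 2.0])
    return length, angle, mid

def crystal_like(seg_a, seg_b, tol_angle=0.1, tol_len=0.2, max_gap=20.0):
    la, aa, ma = segment_features(seg_a)
    lb, ab, mb = segment_features(seg_b)
    d_angle = min(abs(aa - ab), np.pi - abs(aa - ab))   # wrap-around distance
    similar_len = abs(la - lb) <= tol_len * max(la, lb)
    return d_angle <= tol_angle and similar_len and np.linalg.norm(ma - mb) <= max_gap

# Two nearly parallel facets of comparable length, about 10 px apart:
s1 = ((0, 0), (40, 2))
s2 = ((1, 10), (42, 13))
print(crystal_like(s1, s2))   # True -> candidate crystal edge pair
```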

  9. Parallel Magnetic Resonance Imaging

    CERN Document Server

    Uecker, Martin

    2015-01-01

    The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence, it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.

  10. SPINning parallel systems software

    International Nuclear Information System (INIS)

    We describe our experiences in using Spin to verify parts of the Multi Purpose Daemon (MPD) parallel process management system. MPD is a distributed collection of processes connected by Unix network sockets. MPD is dynamic: processes and connections among them are created and destroyed as MPD is initialized, runs user processes, recovers from faults, and terminates. This dynamic nature is easily expressible in the Spin/Promela framework but poses performance and scalability challenges. We present here the results of expressing some of the parallel algorithms of MPD and executing both simulation and verification runs with Spin.

  11. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  12. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop Parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most of this book.

  13. Highly parallel computation

    Science.gov (United States)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Among the highly parallel computing architectures required for advanced scientific computation, those designated 'MIMD' and 'SIMD' have yielded the best results to date. An evaluation of the present development status of such architectures shows that neither has attained a decisive advantage in the treatment of most near-homogeneous problems; for problems involving numerous dissimilar parts, however, such currently speculative architectures as 'neural networks' or 'data flow' machines may be required. Data flow computers are the most practical form of MIMD fine-grained parallel computers yet conceived; they automatically solve the problem of assigning virtual processors to the real processors in the machine.

  14. ADAPTATION OF PARALLEL VIRTUAL MACHINES MECHANISMS TO PARALLEL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zafer DEMİR

    2001-02-01

    Full Text Available This study first reviews the Parallel Virtual Machine (PVM). Since it is based upon parallel processing, it is similar in principle to parallel systems in terms of architecture. Parallel Virtual Machine is neither an operating system nor a programming language; it is a specific software tool that supports heterogeneous parallel systems, yet takes advantage of the features of both to bring users close to parallel systems. Since tasks can be executed in parallel on parallel systems by Parallel Virtual Machine, there is an important similarity between PVM and distributed systems and multiple processors. In this study, the relations in question are examined by making use of the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.
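
    The factorial test mentioned above follows the classic master-slave pattern. As a rough modern stand-in for the PVM version (our own assumption; PVM itself is not used here), the sketch below has a master split the product 1..n into chunks that slave processes reduce in parallel.

```python
# Master-slave style factorial: the "master" splits the range 1..n into
# chunks, "slave" processes compute partial products in parallel, and the
# master multiplies the partial results (a stand-in for the PVM demo).
from multiprocessing import Pool
from math import prod, factorial

def partial_product(bounds):
    lo, hi = bounds
    return prod(range(lo, hi))

def parallel_factorial(n, workers=4):
    step = max(1, n // workers)
    chunks = [(lo, min(lo + step, n + 1)) for lo in range(1, n + 1, step)]
    with Pool(workers) as pool:                  # the slaves
        parts = pool.map(partial_product, chunks)
    return prod(parts)                           # the master combines results

if __name__ == "__main__":
    assert parallel_factorial(1000) == factorial(1000)
    print("1000! computed by master-slave decomposition")
```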

  15. A Comparative Study on Serial and Parallel Web Content Mining

    Directory of Open Access Journals (Sweden)

    Binayak Panda

    2016-03-01

    Full Text Available World Wide Web (WWW is such a repository which serves every individuals need starting with the context of education to entertainment etc. But from users point of view getting relevant information with respect to one particular context is time consuming and also not so easy. It is because of the volume of data which is unstructured, distributed and dynamic in nature. There can be automation to extract relevant information with respect to one particular context, which is named as Web Content Mining. The efficiency of automation depends on validity of expected outcome as well as amount of processing time. The acceptability of outcome depends on user or user’s policy. But the amount of processing time depends on the methodology of Web Content Mining. In this work a study has been carried out between Serial Web Content Mining and Parallel Web Content Mining. This work also focuses on the frame work of implementation of parallelism in Web Content Mining.

  16. A MapReduce based Parallel SVM for Email Classification

    Directory of Open Access Journals (Sweden)

    Ke Xu

    2014-06-01

    Full Text Available Support Vector Machine (SVM) is a powerful classification and regression tool. Various approaches, including SVM-based techniques, have been proposed for email classification. Automated email classification according to messages or user-specific folders, and information extraction from chronologically ordered email streams, have become interesting areas in text machine learning research. This paper presents a parallel SVM based on MapReduce (PSMR) algorithm for email classification. We discuss the challenges that arise from differences between email foldering and traditional document classification. We show experimental results from an array of automated classification methods and evaluation methodologies, including Naive Bayes, SVM and the PSMR method, on foldering results for the Enron datasets based on the timeline. By distributing, processing and optimizing the subsets of the training data across multiple participating nodes, the parallel SVM based on MapReduce algorithm reduces the training time significantly.
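
    The map and reduce phases can be approximated on a single machine. The sketch below is our simplification, not the paper's Hadoop pipeline or Enron preprocessing: each "mapper" trains a local SVM on its data partition and emits its support vectors, and the "reducer" trains the final model on the pooled support vectors, in the spirit of cascade-style SVM training.

```python
# Sketch of the map/reduce phases of a parallel SVM: "map" trains an SVM
# per data partition and emits its support vectors; "reduce" trains the
# final classifier on the pooled support vectors.
import numpy as np
from multiprocessing import Pool
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def map_phase(part):
    """Train a local SVM on one partition; emit its support vectors."""
    X, y = part
    clf = SVC(kernel="linear").fit(X, y)
    return X[clf.support_], y[clf.support_]

if __name__ == "__main__":
    X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
    parts = [(X[i::4], y[i::4]) for i in range(4)]   # 4 "mapper" partitions

    with Pool(4) as pool:
        sv = pool.map(map_phase, parts)              # map phase, in parallel

    # Reduce: pool the support vectors and train the final model on them.
    Xr = np.vstack([s[0] for s in sv])
    yr = np.concatenate([s[1] for s in sv])
    final = SVC(kernel="linear").fit(Xr, yr)
    print("train accuracy:", final.score(X, y))
```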

  17. Parallel programming with PCN

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  18. Optical parallel selectionist systems

    Science.gov (United States)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using the methods described in this paper.

  19. Parallel plate detectors

    International Nuclear Information System (INIS)

    A 5x3 cm2 (timing only) and a 15x5 cm2 (timing and position) parallel plate avalanche counter (PPAC) are considered. The theory of operation and timing resolution is given. The measurement set-up and the curves of experimental results illustrate the capabilities of the two counters.

  20. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  1. Applied parallel computing

    CERN Document Server

    Deng, Yuefan

    2012-01-01

    The book provides a practical guide to computational scientists and engineers to help advance their research by exploiting the superpower of supercomputers with many processors and complex networks. This book focuses on the design and analysis of basic parallel algorithms, the key components for composing larger packages for a wide range of applications.

  2. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  3. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  4. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  5. NAS Parallel Benchmarks Results

    Science.gov (United States)

    Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention NAS's future plans for the NPB.

  6. Optimizing parallel reduction operations

    Energy Technology Data Exchange (ETDEWEB)

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
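
    The reason reduction classes matter is algebraic: an associative operator lets the sequential fold be regrouped into independent partial reductions. A minimal sketch of that regrouping, using addition as the (associative) operator:

```python
# Associativity is what licenses parallel reduction: partial sums over
# chunks can be combined in any grouping without changing the result.
from multiprocessing import Pool
from functools import reduce
import operator

def chunk_sum(chunk):
    return reduce(operator.add, chunk, 0)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]       # 8 independent partial sums
    with Pool(8) as pool:
        partials = pool.map(chunk_sum, chunks)    # computed in parallel
    total = reduce(operator.add, partials, 0)     # combine partial results
    assert total == sum(data)
    print(total)
```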

  7. Economical parallel oligonucleotide and peptide synthesizer - PET OLIGATOR

    Czech Academy of Sciences Publication Activity Database

    Lebl, M.; Pistek, Ch.; Hachmann, J.; Mudra, Petr; Pešek, Václav; Pokorný, Vít; Poncar, Pavel; Ženíšek, Karel

    2007-01-01

    Roč. 13, 1/2 (2007), s. 367-375. ISSN 1573-3149 Grant ostatní: NIH SBIR(US) R43 GM61511-01; NIH SBIR(US) R43 GM58981-01 Institutional research plan: CEZ:AV0Z40550506 Keywords : automated synthesizer * centrifugation * parallel synthesis Subject RIV: CC - Organic Chemistry Impact factor: 0.971, year: 2007

  8. A MapReduce based Parallel SVM for Email Classification

    OpenAIRE

    Ke Xu; Cui Wen; Qiong Yuan; Xiangzhu He; Jun Tie

    2014-01-01

    Support Vector Machine (SVM) is a powerful classification and regression tool. Varying approaches including SVM based techniques are proposed for email classification. Automated email classification according to messages or user-specific folders and information extraction from chronologically ordered email streams have become interesting areas in text machine learning research. This paper presents a parallel SVM based on MapReduce (PSMR) algorithm for email classification. We discuss the chal...

  9. Trapping Parallel Port to Operate 220V Appliances

    OpenAIRE

    Prateek Sharma; Kapil Kumar; Ajay Kumar Singh

    2012-01-01

    With the advancement of technology, things are becoming simpler and easier for us. Automation is the use of technology to reduce human work. Automatic systems are being preferred over manual systems. Internet controlling offers a new approach to control electric appliances from a remote terminal, using the Internet, Bluetooth and a Local Area Network connection. This system is accomplished by personal computers, parallel port, local area network connection, internet connection, mobile phone and Bluetooth dev...

  10. Automated Camera Calibration

    Science.gov (United States)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.

  11. Automated telescope scheduling

    Science.gov (United States)

    Johnston, Mark D.

    1988-08-01

    With the ever-increasing level of automation of astronomical telescopes, the benefits and feasibility of automated planning and scheduling are becoming more apparent. Improved efficiency and increased overall telescope utilization are the most obvious goals. Automated scheduling at some level has been done for several satellite observatories, but the requirements on these systems were much less stringent than on modern ground or satellite observatories. The scheduling problem is particularly acute for the Hubble Space Telescope: virtually all observations must be planned in excruciating detail weeks to months in advance. Space Telescope Science Institute has recently made significant progress on the scheduling problem by exploiting state-of-the-art artificial intelligence software technology. What is especially interesting is that this effort has already yielded software that is well suited to scheduling ground-based telescopes, including the problem of optimizing the coordinated scheduling of more than one telescope.

  12. To Parallelize or Not to Parallelize, Speed Up Issue

    Directory of Open Access Journals (Sweden)

    Alaa Ismail El-Nashar

    2011-03-01

    Full Text Available Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing serial applications, some analysis is recommended to be carried out to decide whether the application will benefit from parallelization or not. In this paper we discuss the issue of the speedup gained from parallelization using the Message Passing Interface (MPI), to compromise between the overhead of parallelization cost and the gained parallel speedup. We also propose an experimental method to predict the speedup of MPI applications.
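
    The compromise the paper discusses is classically framed by Amdahl's law, S(n) = 1 / ((1 - p) + p/n) for parallel fraction p on n processes. The sketch below adds a simple linear communication-overhead term c*n of our own devising (not the paper's model) to show how overhead eventually erases the gain:

```python
# Amdahl-style speedup estimate with an additive communication overhead:
# S(n) = 1 / ((1 - p) + p/n + c*n), where p is the parallel fraction and
# c models per-process overhead. The c*n term is an illustrative assumption.
def speedup(p, n, c=0.0):
    return 1.0 / ((1.0 - p) + p / n + c * n)

for n in (1, 2, 4, 8, 16, 32):
    print(f"{n:3d} processes: ideal {speedup(0.95, n):5.2f}, "
          f"with overhead {speedup(0.95, n, c=0.002):5.2f}")
```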

  13. Cultural Resources, Recreation Areas, ESRI - This data set represents the recreational areas found in Utah, including campgrounds, golf courses and ski resorts. Published in 2001, smaller than 1:100000 scale, State of Utah Automated Geographic Reference Center.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cultural Resources dataset, published at Smaller than 1:100000 scale, was produced all or in part from Published Reports/Deeds information as of 2001. It is...

  14. Myths in test automation

    OpenAIRE

    Jazmine Francis

    2015-01-01

    Myths in the automation of software testing are an issue of discussion that echoes around the validation services area of the software industry. Probably the first thought that appears to a knowledgeable reader would be: why this old topic again? What is new to discuss? But everyone agrees that test automation today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as a simple linear script...

  15. Automated phantom assay system

    International Nuclear Information System (INIS)

    This paper describes an automated phantom assay system developed for assaying phantoms spiked with minute quantities of radionuclides. The system includes a computer-controlled linear-translation table that positions the phantom at exact distances from a spectrometer. A multichannel analyzer (MCA) interfaces with a computer to collect gamma spectral data. Signals transmitted between the controller and MCA synchronize data collection and phantom positioning. Measured data are then stored on disk for subsequent analysis. The automated system allows continuous unattended operation and ensures reproducible results

  16. Parallel multilevel preconditioners

    Energy Technology Data Exchange (ETDEWEB)

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
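
    Where a preconditioner enters the iteration is easy to show at small scale. The sketch below uses a one-level Jacobi (diagonal) preconditioner inside SciPy's conjugate gradient on a 1-D model problem; the paper's parallel multilevel preconditioners replace this diagonal with a mesh hierarchy.

```python
# One-level illustration of preconditioned iteration: Jacobi (diagonal)
# preconditioning of CG on a 1-D Laplacian. Multilevel preconditioners
# replace the diagonal below with a hierarchy of coarser problems.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply D^{-1}, the inverse of A's diagonal.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x / d)

x, info = cg(A, b, M=M)           # info == 0 signals convergence
print("info:", info, "residual:", np.linalg.norm(b - A @ x))
```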

  17. Parallel clustering with CFinder

    CERN Document Server

    Pollner, Peter; Vicsek, Tamas; 10.1142/S0129626412400014

    2012-01-01

    The amount of available data about complex systems is increasing every year; measurements of larger and larger systems are collected and recorded. A natural representation of such data is given by networks, whose size follows the size of the original system. The current trend of multiple cores in computing infrastructures calls for a parallel reimplementation of earlier methods. Here we present the grid version of CFinder, which can locate overlapping communities in directed, weighted or undirected networks based on the clique percolation method (CPM). We show that the computation of the communities can be distributed among several CPUs or computers. Although switching to the parallel version does not necessarily lead to a gain in computing time, it definitely makes the community structure of extremely large networks accessible.
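
    The clique percolation method at CFinder's core also has a sequential implementation in NetworkX; the sketch below shows the kind of computation the grid version distributes across CPUs for very large networks.

```python
# Clique percolation (the CPM underlying CFinder) on a small test graph,
# via NetworkX's k_clique_communities. Sequential; the grid version of
# CFinder distributes this work for extremely large networks.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.ring_of_cliques(4, 5)          # 4 overlapping 5-cliques in a ring
communities = list(k_clique_communities(G, 4))
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```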

  18. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms stand out as the oldest: fast, solid and precise. This work outlines a few main elements of Stewart platforms. It begins with the geometry of the platform and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile plate are then traced by a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to one another, it is more convenient for the drive train, and especially for the dynamics, to represent that motor element as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform 7) and one fixed.

  19. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
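
    The two partitioning phases read like a recipe and can be sketched directly. The snippet below is a deliberate simplification of ours, using a 1-D grid of point objects, whereas the patent covers general objects and grid portions: phase 1 determines in parallel which portion bounds each object, and phase 2 has each worker populate its own portion.

```python
# Sketch of two-phase parallel grid population: phase 1 maps objects to
# grid portions in parallel; phase 2 has one worker per portion gather
# the objects bounded by it. Simplified to a 1-D grid of point objects.
from multiprocessing import Pool

N_PORTIONS = 4
GRID_MAX = 100.0

def portion_of(obj):
    """Phase 1: map one object (a point) to the portion bounding it."""
    return min(int(obj / GRID_MAX * N_PORTIONS), N_PORTIONS - 1)

def populate(args):
    """Phase 2: one worker collects the objects for its portion."""
    portion, tagged = args
    return portion, [o for p, o in tagged if p == portion]

if __name__ == "__main__":
    objects = [3.2, 97.5, 40.0, 55.1, 12.9, 88.8, 61.0, 25.4]
    with Pool(N_PORTIONS) as pool:
        tags = pool.map(portion_of, objects)             # phase 1, parallel
        tagged = list(zip(tags, objects))
        grid = dict(pool.map(populate,
                             [(p, tagged) for p in range(N_PORTIONS)]))
    print(grid)
```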

  20. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  1. TAO with a parallel syntactor

    OpenAIRE

    Diaz, C.

    1990-01-01

    In remote control, the master element operated by the user looks, for practical and historical reasons, like the slave arm and therefore features a serial architecture, with a few drawbacks in terms of mass, dimensions, rigidity and mechanical complexity. To remedy these defects, we are now introducing a new master element with parallel kinematics. This syntactor, derived from Stewart's manipulators, has six degrees of freedom and comprises six motor-driven links arranged on a fixed plate (t...

  2. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  3. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  4. Parallel computing techniques

    OpenAIRE

    Nakano, Junji

    2004-01-01

    Parallel computing means to divide a job into several tasks and use more than one processor simultaneously to perform these tasks. Assume you have developed a new estimation method for the parameters of a complicated statistical model. After you prove the asymptotic characteristics of the method (for instance, asymptotic distribution of the estimator), you wish to perform many simulations to assure the goodness of the method for reasonable numbers of data values and for different values of pa...
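
    Since the simulation replications described above are independent, they parallelize trivially; a minimal sketch (the toy estimator and all names are assumed, not from the paper):

        # Each worker runs one replication with its own seeded random stream.
        from multiprocessing import Pool
        import random
        import statistics

        def one_replication(seed):
            rng = random.Random(seed)                 # independent stream
            sample = [rng.gauss(0.0, 1.0) for _ in range(1000)]
            return statistics.mean(sample)            # stand-in estimator

        if __name__ == "__main__":
            with Pool() as pool:                      # one worker per CPU
                estimates = pool.map(one_replication, range(500))
            print(statistics.mean(estimates), statistics.stdev(estimates))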

  5. Parallel Computation Is ESS

    OpenAIRE

    Mondal, Nabarun; Ghosh, Partha P.

    2013-01-01

    There is an enormous number of examples of computation in nature, exemplified across multiple species in biology. One crucial aim of these computations across all life forms is the ability to learn and thereby increase the chance of survival. In the current paper a formal definition of autonomous learning is proposed. From that definition we establish a Turing Machine model for learning, where rule tables can be added or deleted, but cannot be modified. Sequential and parallel implementa...

  6. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  7. Controlled Fuzzy Parallel Rewriting

    OpenAIRE

    Asveld, Peter R.J.

    1996-01-01

    We study a Lindenmayer-like parallel rewriting system to model the growth of filaments (arrays of cells) in which developmental errors may occur. In essence this model is the fuzzy analogue of the derivation-controlled iteration grammar. Under minor assumptions on the family of control languages and on the family of fuzzy languages in the underlying iteration grammar, we show that (i) regular control does not provide additional generating power to the model, (ii) the number of fuzzy substitut...

  8. Controlled Fuzzy Parallel Rewriting

    OpenAIRE

    Asveld, Peter R.J.; Paun, G.; Salomaa, A

    1997-01-01

    We study a Lindenmayer-like parallel rewriting system to model the growth of filaments (arrays of cells) in which developmental errors may occur. In essence this model is the fuzzy analogue of the derivation-controlled iteration grammar. Under minor assumptions on the family of control languages and on the family of fuzzy languages in the underlying iteration grammar, we show (i) regular control does not provide additional generating power to the model, (ii) the number of fuzzy substitutions ...
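
    The "parallel" in such Lindenmayer-like systems means every symbol is rewritten simultaneously at each derivation step. A minimal sketch of plain (non-fuzzy, uncontrolled) parallel rewriting, with illustrative rules not taken from the paper:

        RULES = {"a": "ab", "b": "a"}   # illustrative production rules

        def step(word):
            # All positions are rewritten at once, unlike sequential grammars.
            return "".join(RULES.get(sym, sym) for sym in word)

        word = "a"
        for _ in range(5):
            word = step(word)
        print(word)  # abaababaabaab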

  9. Parallel programming with MPI

    International Nuclear Information System (INIS)

    MPI is a practical, portable, efficient and flexible standard for message passing, which has been implemented on most MPPs and networks of workstations by machine vendors, universities and national laboratories. MPI avoids specifying how operations will take place and avoids superfluous work, achieving efficiency as well as portability; it is also designed to encourage overlapping communication and computation to hide communication latencies. This presentation briefly explains the MPI standard and comments on efficient parallel programming to improve performance. (author)
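
    A minimal sketch of the communication/computation overlap the standard encourages, using the mpi4py bindings (assumed installed) and exactly two ranks; run with e.g. mpiexec -n 2 python overlap.py:

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = 1 - rank                    # assumes exactly 2 ranks

        send = np.full(1_000_000, rank, dtype="d")
        recv = np.empty_like(send)

        # Post nonblocking operations, then compute while data is in flight.
        reqs = [comm.Isend(send, dest=peer), comm.Irecv(recv, source=peer)]
        local = np.sum(send * send)        # useful work hides the latency
        MPI.Request.Waitall(reqs)
        print(f"rank {rank}: local={local:.0f}, got={recv[0]:.0f}")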

  10. Automated conflict resolution issues

    Science.gov (United States)

    Wike, Jeffrey S.

    1991-01-01

    A discussion is presented of how conflicts for Space Network resources should be resolved in the ATDRSS era. The following topics are presented: a description of how resource conflicts are currently resolved; a description of issues associated with automated conflict resolution; present conflict resolution strategies; and topics for further discussion.

  11. Protokoller til Home Automation

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the status of the home can be viewed from a computer or perhaps even from a mobile phone. While the applications mentioned are classics of home automation, additional functionality has emerged...

  12. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2015-01-01

    Full Text Available Myths in the automation of software testing are an issue of discussion that echoes throughout the software validation industry. Probably the first thought to appear in a knowledgeable reader's mind would be: Why this old topic again? What is new to discuss? But everyone agrees that automation testing today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications today has a complex architecture and a hybrid framework to facilitate the testing of applications developed on various platforms and technologies. Undoubtedly automation has advanced, but so have the myths associated with it. The change in people's perspective on and knowledge of automation has altered the terrain. This article reflects the author's points of view and experience regarding the transformation of the original myths into new versions, and how they are derived; it also provides his thoughts on the new generation of myths.

  13. Automated data model evaluation

    International Nuclear Information System (INIS)

    The modeling process is an essential phase of information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding automation of the model correctness analysis process, and its relation to ontology tools, are presented. Key words: Database modeling, Data model correctness, Evaluation

  14. Automated solvent concentrator

    Science.gov (United States)

    Griffith, J. S.; Stuart, J. L.

    1976-01-01

    Designed for the automated drug identification system (AUDRI), the device increases concentration 100-fold. The sample is first filtered, removing particulate contaminants and reducing the water content of the sample. The sample is then extracted from the filtered residue by a specific solvent. The concentrator provides input material to the analysis subsystem.

  15. ELECTROPNEUMATIC AUTOMATION EDUCATIONAL LABORATORY

    OpenAIRE

    Dolgorukov, S. O.; National Aviation University; Roman, B. V.; National Aviation University

    2013-01-01

    The article reflects the current situation in education regarding the difficulties of learning mechatronics. A complex of laboratory test benches on electropneumatic automation is considered as a tool for advancing through technical science. A course of laboratory works, developed to meet the requirement of an efficient and reliable way of acquiring practical skills, is regarded as the simplest way for students to learn the basics of mechatronics.

  16. Automating spectral measurements

    Science.gov (United States)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.
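
    For illustration, driving such an automation server from Python via COM might look as follows; the ProgID "FTG.SpectroServer" and every method shown are invented for this sketch (pywin32 assumed installed), not FTG's actual API:

        import win32com.client

        # Attach to a hypothetical DAQ automation server by its ProgID.
        app = win32com.client.Dispatch("FTG.SpectroServer")
        app.Visible = False                      # run invisibly under control
        for barcode in ("S001", "S002"):         # e.g. from a barcode reader
            app.LoadSample(barcode)              # hypothetical method
            spectrum = app.Scan(400, 700, 1.0)   # hypothetical: nm range, step
            print(barcode, len(spectrum))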

  17. El general de brigada es um tipo de caramelo – tradução automática e aprendizagem cultural.
    DOI: 10.5007/2175-7968.2011v1n27p243

    OpenAIRE

    Nylcea Thereza de Siqueira Pedra; Ruth Bohunovsky

    2011-01-01

    This article aims to investigate the role assumed by (machine) translation in the teaching and learning of foreign languages. To this end, it first discusses the different didactic functions attributed to translation. Among them, the one aimed at "cultural learning" stands out, which seeks to sensitize learners to cultural aspects related to the language. Drawing on several theorists, the use of concepts such as "culture", "language" and "translation" is problematized, and it is found ...

  18. Parallel Programming with Declarative Ada

    OpenAIRE

    Thornley, John

    1993-01-01

    Declarative programming languages (e.g., functional and logic programming languages) are semantically elegant and implicitly express parallelism at a high level. We show how a parallel declarative language can be based on a modern structured imperative language with single-assignment variables. Such a language combines the advantages of parallel declarative programming with the strengths and familiarity of the underlying imperative language. We introduce Declarative Ada, a parallel declarativ...

  19. Integrating Task and Data Parallelism

    OpenAIRE

    Massingill, Berna

    1993-01-01

    Many models of concurrency and concurrent programming have been proposed; most can be categorized as either task-parallel (based on functional decomposition) or data-parallel (based on data decomposition). Task-parallel models are most effective for expressing irregular computations; data-parallel models are most effective for expressing regular computations. Some computations, however, exhibit both regular and irregular aspects. For such computations, a better programming model is one that i...
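
    A minimal sketch of one way to combine the two models (stages and data invented for the example): two independent stages run as concurrent tasks, and each stage maps over its data in parallel.

        from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

        def square(x):
            return x * x

        def negate(x):
            return -x

        def run_stage(fn, xs):
            # Data parallelism: a regular map over one stage's data.
            with ProcessPoolExecutor() as ex:
                return list(ex.map(fn, xs))

        if __name__ == "__main__":
            # Task parallelism: two independent stages submitted at once.
            with ThreadPoolExecutor() as tasks:
                fa = tasks.submit(run_stage, square, range(8))
                fb = tasks.submit(run_stage, negate, range(8))
                print(fa.result(), fb.result())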

  20. Parallel processing approaches in robotics

    OpenAIRE

    Henrich, Dominik; Höniger, Thomas

    1997-01-01

    This paper presents the different possibilities for parallel processing in robot control architectures. At the beginning, we shortly review the historic development of control architectures. Then, a list of requirements for control architectures is set up from a parallel processing point of view. As our main topic, we identify the levels of parallel processing in robot control architectures. With each level of parallelism, examples for a typical robot control architecture are presented. Final...

  1. Parallel Repetition From Fortification

    OpenAIRE

    Moshkovitz Aaronson, Dana Hadar

    2014-01-01

    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games – “fortification” – and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from...

  2. Matlab in Parallel

    Czech Academy of Sciences Publication Activity Database

    Jakl, Ondřej; Musil, Tomáš

    Ostrava: VŠB-TU Ostrava, 2007 - (Doležalová, J.), s. 100-104 ISBN 978-80-248-1649-4. [Moderní matematické metody v inženýrství. Dolní Lomná (CZ), 04.06.2007-06.06.2007] R&D Projects: GA AV ČR IBS3086102 Institutional research plan: CEZ:AV0Z30860518; CEZ:AV0Z20760514 Keywords : Matlab * parallel processing * nonlinear dynamics of rotors Subject RIV: IN - Informatics, Computer Science

  3. Object-Oriented Parallel Programming

    OpenAIRE

    Givelberg, Edward

    2014-01-01

    We introduce an object-oriented framework for parallel programming, which is based on the observation that programming objects can be naturally interpreted as processes. A parallel program consists of a collection of persistent processes that communicate by executing remote methods. We discuss code parallelization and process persistence, and explain the main ideas in the context of computations with very large data objects.

  4. 21 CFR 866.2170 - Automated colony counter.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated colony counter. 866.2170 Section 866.2170 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... purposes to determine the number of bacterial colonies present on a bacteriological culture...

  5. 21 CFR 866.2850 - Automated zone reader.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Automated zone reader. 866.2850 Section 866.2850 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... surface of certain culture media used in disc-agar diffusion antimicrobial susceptibility tests....

  6. Effective Manufacturing Method for Automated Inside Diameter Grinding

    Science.gov (United States)

    Slowinski, Bronislaw; Nadolny, Krzysztof

    This paper presents the essence and results of experimental investigations of a highly efficient automated internal cylindrical grinding method. The essence of this method consists in removing the whole grinding allowance in one pass of the grinding wheel while preserving the required quality of the surface layer of the workpiece. The grinding wheel applied in the developed method had a zonally diversified internal structure and a properly prepared conical chamfer.

  7. Tolerant (parallel) Programming

    Science.gov (United States)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  8. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

    This book covers mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology is based on the author's screw theory, proposed in 1997, whose generality and validity were only proved recently; mobility itself is a very complex issue, researched by various scientists over the last 150 years. The principle of kinematic influence coefficients and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, and the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  9. Massively Parallel QCD

    Energy Technology Data Exchange (ETDEWEB)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  10. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  11. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.;

    2015-01-01

    -parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased 1.2 °C and 3 °C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability 1.4 °C compared to the wild type oligonucleotide......-1-yl chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability...... decreased about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain...

  12. Massively Parallel QCD

    International Nuclear Information System (INIS)

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  13. Automated Preferences Elicitation

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Guy, Tatiana Valentine

    Prague : Institute of Information Theory and Automation, 2011, s. 20-25. ISBN 978-80-903834-6-3. [The 2nd International Workshop od Decision Making with Multiple Imperfect Decision Makers. Held in Conjunction with the 25th Annual Conference on Neural Information Processing Systems (NIPS 2011). Sierra Nevada (ES), 16.12.2011-16.12.2011] R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0567 Institutional research plan: CEZ:AV0Z10750506 Keywords : elicitation * decision making * Bayesian decision making * fully probabilistic design Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2011/AS/karny-automated preferences elicitation.pdf

  14. Automated drawing generation system

    International Nuclear Information System (INIS)

    Since automated CAD drawing generation systems still required human intervention, improvements focused on the interactive processing section (data input and correction operations), which necessitates a vast amount of work. As a result, human intervention was eliminated, achieving the original objective of a computerized system. This is the first step taken towards complete automation. The effects of development and commercialization of the system are as described below. (1) The interactive processing time required for generating drawings was reduced; it was determined that introduction of the CAD system shortened the time required for generating drawings. (2) Differences in skill between workers preparing drawings have been eliminated and the quality of drawings has been made uniform. (3) The extent of knowledge and experience demanded of workers has been reduced. (author)

  15. Terminal automation system maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Coffelt, D.; Hewitt, J. [Engineered Systems Inc., Tempe, AZ (United States)

    1997-01-01

    Nothing has improved petroleum product loading in recent years more than terminal automation systems. The presence of terminal automation systems (TAS) at loading racks has increased operational efficiency and safety and enhanced their accounting and management capabilities. However, like all finite systems, they occasionally malfunction or fail. Proper servicing and maintenance can minimize this. And in the unlikely event a TAS breakdown does occur, prompt and effective troubleshooting can reduce its impact on terminal productivity. To accommodate around-the-clock loading at racks, increasingly unattended by terminal personnel, TAS maintenance, servicing and troubleshooting has become increasingly demanding. It has also become increasingly important. After 15 years of trial and error at petroleum and petrochemical storage and transfer terminals, a number of successful troubleshooting programs have been developed. These include 24-hour "help hotlines," internal (terminal company) and external (supplier) support staff, and "layered" support. These programs are described.

  16. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed worldwide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  17. Rapid automated nuclear chemistry

    International Nuclear Information System (INIS)

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC

  18. Rapid automated nuclear chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, R.A.

    1979-05-31

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC.

  19. Components for automated microscopy

    Science.gov (United States)

    Determann, H.; Hartmann, H.; Schade, K. H.; Stankewitz, H. W.

    1980-12-01

    A number of devices aiming at automated analysis of microscopic objects, as regards their morphometrical parameters or their photometrical values, were developed. These comprise: (1) a device for automatic focusing tuned to maximum contrast; (2) a feedback system for automatic optimization of microscope illumination; and (3) microscope lenses with adjustable pupil distances for use in the two previous devices. An extensive test program on histological and cytological applications proves the wide application possibilities of the autofocusing device.

  20. Automation of dissolution tests

    OpenAIRE

    Rolf Rolli

    2003-01-01

    Dissolution testing of drug formulations was introduced in the 1960s and accepted by health regulatory authorities in the 1970s. Since then, the importance of dissolution has grown rapidly, as have the number of tests and the demands on quality-control laboratories. Recent research has led to the development of in-vitro dissolution tests as replacements for human and animal bioequivalence studies. For many years, a lot of time and effort has been invested in the automation of dissolution tests. The...

  1. Automated uranium assays

    International Nuclear Information System (INIS)

    Precise, timely inventories of enriched uranium stocks are vital to help prevent the loss, theft, or diversion of this material for illicit use. A wet-chemistry analyzer has been developed at LLL to assist in these inventories by performing automated analyses of uranium samples from different stages in the nuclear fuel cycle. These assays offer improved accuracy, reduced costs, significant savings in manpower, and lower radiation exposure for personnel compared with present techniques

  2. Construction Automation and Robotics

    OpenAIRE

    Bock, Thomas

    2008-01-01

    Due to the high complexity of the construction process and stagnating technological development, long-term preparation is necessary to adapt it to advanced construction methods. Architects, engineers and all other participants in the construction process have to be integrated into this adaptation process. The short- and long-term development of automation will take place step by step and will be oriented to the respective application and requirements. In the initial phase existing building...

  3. Shielded cells transfer automation

    International Nuclear Information System (INIS)

    Nuclear waste from shielded cells is removed, packaged, and transferred manually in many nuclear facilities. To reduce radiation exposure to operators, technological advances in remote handling and automation were employed. An industrial robot and a specially designed end effector, access port, and sealing machine were used to remotely bag waste containers out of a glove box. The system is operated from a control panel outside the work area via television cameras

  4. LINAC control automation system

    International Nuclear Information System (INIS)

    A 7 MeV electron beam linear accelerator (LINAC) used for pulse radiolysis experiments at RC and CDD, B.A.R.C., has been automated with a PLC-based control panel designed and developed by Computer Division, B.A.R.C. After power-on, a single turn of the START key from OFF to ON switches on the various units in a predefined sequence and at predefined intervals. The control panel also generates various ramp signals at predefined sequences and rates, together with steady values, and feeds them to the LINAC, bringing it to the ready-for-experiment condition. Similarly, a single turn of the STOP key from OFF to ON ramps down the various signals in a predefined manner and switches off the units in a predefined sequence and timing, protecting the machine. The steady values of the various signals are settable online whenever required. This automation system relieves the operator of the fatigue of time-consuming manual ramping of signals and of moving between four rooms to switch the various units on or off, enhancing efficiency and safety. It also allows the user scientist to perform start-up and shutdown in the absence of skilled operators, adding the flexibility of extended working hours. The unit has been working satisfactorily since August 2002. A changeover from automatic to manual operation, and vice versa, is provided for extraordinary conditions. (author)

  5. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement

  6. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
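
    The contrast can be written compactly (notation assumed, with M_k Jones matrices and a_k(t) time-varying intensity weights; this is illustrative, not the authors' exact formulation):

        % Serial architecture: the output field is a product of matrices.
        E_{\text{out}} = M_n \cdots M_2 M_1 \, E_{\text{in}}
        % Parallel architecture: spatially separated, individually modulated
        % components are recombined, i.e. a weighted sum of matrices.
        E_{\text{out}} = \Big( \sum_{k=1}^{n} a_k(t)\, M_k \Big) E_{\text{in}}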

  7. Parallel Polarization State Generation

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.

  8. Parallel Polarization State Generation

    CERN Document Server

    She, Alan

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristi...

  9. An Analysis of the Relationship of Confucian Thoughts to Chinese Traditional Culture with Western Culture Concerned

    Institute of Scientific and Technical Information of China (English)

    王宝

    2008-01-01

    Drawing support from the kernel theories of Confucian thought and the dominant characteristics of Chinese traditional culture, this article introduces how Confucian thought plays a chief role in Chinese traditional culture. The study first analyzes the kernel theories of Confucian thought and the chief characteristics of Chinese traditional culture along two parallel lines. Chinese traditional culture is then deliberated in terms of culture patterns, with both Confucian thought and Western culture concerned.

  10. Parallel Detection of Cathodoluminescence.

    Science.gov (United States)

    Day, John C. C.

    Available from UMI in association with The British Library. A GEC P8600 Charge-coupled device has been used in the design and fabrication of a parallel detection system or optical multichannel analyser for the analysis of Cathodoluminescence Spectra. The P8600, whilst designed for video applications, is used as a linear array by merging entire rows of pixels together on the on-board output amplifier. A dual slope integration method of correlated double sampling has been used for noise reduction. An analysis of the performance of this system is given and the achieved noise level of 22 electrons is found to be in good agreement with that theoretically possible. A complete description of the circuits is given together with details of its use with a "Link 860" computer/analyser and a "Philips 400" electron microscope. To demonstrate the system, a study of the cathodoluminescent properties of Cadmium Telluride grown by molecular beam epitaxy has been made. In particular the effect of dislocations, stacking faults and twins on luminescence has been studied. Dislocations are seen to cause a quenching of excitonic emission with no corresponding increase in any other emission. The effect of stacking faults was seen to vary between different samples with an enhancement of long wavelength emission seen in poor quality samples. This supports the premise that the faults are nucleated by surface impurities which are also responsible for the enhanced emission. Some twin defects have been found to cause enhanced excitonic emission. This is compatible with the existence of natural quantum wells at twin faults proposed by other workers. The speed with which the parallel detection system can acquire spectra makes it a valuable tool in the study of beam sensitive materials. To demonstrate this, measurements were made of the decay rates of the weak cathodoluminescence from the organic crystal Coronene. These rates were seen to have time constants less than two minutes and such

  11. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    OpenAIRE

    Loredana MOCEAN; Monica CEACA

    2009-01-01

    In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of the effort made. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  12. Topic 7: parallel computer architecture and instruction level parallelism

    OpenAIRE

    Ayguadé Parra, Eduard; Wolfgang, Kark; De Bosschere, Koen; Francois, Jean Collard

    2006-01-01

    We welcome you to the two Parallel Computer Architecture and Instruction Level Parallelism sessions of Euro-Par 2006 conference being held in Dresden, Germany. The call for papers for this Euro-Par topic area sought papers on all hardware/software aspects of parallel computer architecture, processor architecture and microarchitecture. This year 12 papers were submitted to this topic area. Among the submissions, 5 papers were accepted as full papers for the conference (41% acceptance rate).

  13. To parallelize or not to parallelize, bugs issue

    OpenAIRE

    El-Nashar, Alaa I.; Masaki, Nakamura

    2013-01-01

    Program correctness is one of the most difficult challenges in parallel programming. The Message Passing Interface (MPI) is widely used in writing parallel applications. Since MPI is not a compiled language, the programmer will be faced with several programming bugs. This paper presents the most common programming bugs that arise in MPI programs, to help the programmer weigh the advantage of parallelism against the extra effort needed to detect and fix such bugs. An algebraic specification o...

  14. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    Full Text Available In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of the effort made. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  15. Strategy, culture and innovation performance

    DEFF Research Database (Denmark)

    do Nascimento Gambi, Lillian; Boer, Harry

    2015-01-01

    Firms strive to improve their performance, and organizational culture has been recognized as an important driver of better performance. In parallel, strategy is viewed as an important contextual variable that influences organizational culture as well as performance. This study has two main goals......: (1) investigating the relationship between strategic practices and innovation performance, and (2) determining if strategy has a direct and/or an indirect, culture-mediated effect on innovation performance, and if this effect varies across strategic practices and culture profiles. The research model...... suitable cultural profile achieve the strongest performance effects....

  16. Automated Assessment, Face to Face

    OpenAIRE

    Rizik M. H. Al-Sayyed; Amjad Hudaib; Muhannad AL-Shboul; Yousef Majdalawi; Mohammed Bataineh

    2010-01-01

    This research paper evaluates the usability of automated exams and compares them with traditional paper-and-pencil ones. It presents the results of a detailed study conducted at The University of Jordan (UoJ) that comprised students from 15 faculties. A set of 613 students were asked about their opinions concerning automated exams, and their opinions were analyzed in depth. The results indicate that most students reported that they are satisfied with using automated exams but they have sugg...

  17. Automation System Products and Research

    OpenAIRE

    Rintala, Mikko; Sormunen, Jussi; Kuisma, Petri; Rahkala, Matti

    2014-01-01

    Automation systems are used in most buildings nowadays. In the past they were mainly used in industry to control and monitor critical systems. During the past few decades automation systems have become more common and are used today in everything from big industrial solutions to the homes of private customers. With the growing need for ecological and cost-efficient management systems, home and building automation systems are becoming a standard way of controlling lighting, ventilation, heating, etc. Auto...

  18. Software Testing and Documenting Automation

    OpenAIRE

    Tsybin, Anton; Lyadova, Lyudmila

    2008-01-01

    This article describes some approaches to the problem of automating testing and documentation in information systems with graphical user interfaces. A combination of data mining methods and the theory of finite state machines is used for test automation. Automated creation of software documentation is based on the use of metadata in the documented system. Metadata is built on a graph model. The described approaches improve the performance and quality of the testing and documenting processes.

  19. Embedded system for building automation

    OpenAIRE

    Rolih, Andrej

    2014-01-01

    Home automation is a fast-developing field of computer science and electronics. Companies offer many different products for home automation, ranging from complete systems for building management and control to simple smart lights that can be connected to the internet. These products offer users greater living comfort and lower their expenses by reducing energy usage. This thesis shows the development of a simple home automation system that focuses mainly on the enhance...

  20. Evaluation of an automated method for urinocolture screening

    Directory of Open Access Journals (Sweden)

    Claudia Ballabio

    2010-09-01

    Full Text Available Introduction: Urinary tract infections are among the most common diseases encountered in medical practice and are diagnosed with traditional methods of cultivation on plates. In this study we evaluated automated instrumentation for urine culture screening that can provide results quickly and guarantee traceability. The comparison of results obtained with the automated and plate methods is reported. Methods: 316 urine samples, including midstream urine, catheter urine and bag urine, were analyzed by the Alfred 60 (Alifax) through light-scattering technology that measures the replication of bacteria. Simultaneously, the samples were seeded on agar plates (CPS3, Cled agar, McConkey agar). Results: Of the 316 samples analyzed by the automated method, 190 were negative, all confirmed by culture, while 126 were found positive. 82 cases were confirmed positive on the culture plate, 65 with significant isolation of bacteria and 17 with polymicrobial flora at a significant bacterial load. 44 cases were negative on the culture plate but positive by the automated method. Conclusions: The absence of false negative results at low bacterial loads can represent a starting point for introducing an automated method for urine culture screening.

  1. Parallelism through Digital Circuit Design

    OpenAIRE

    O'Donnell, John

    2008-01-01

    Two ways to exploit chips with a very large number of transistors are multicore processors and programmable logic chips. Some data parallel algorithms can be executed efficiently on ordinary parallel computers, including multicores. A class of data parallel algorithms is identified which have characteristics that make implementation on multiprocessors inefficient, but they are well suited for direct design as digital circuits. This leads to a programming model called c...

  2. Parallel adaptive finite state automata

    OpenAIRE

    Rocha, Ricardo L.; Garanhani, César E.C.

    2006-01-01

    Interest in parallelism has grown in many areas of technology. Hardware development has evolved greatly in recent years, leaving to software developers the task of building better tools and compilers for parallel computation. Symbolic computation, too, must take advantage of parallel computation. The proposal contained in this paper is to use functional languages as a tool to implement adaptive automata using the concepts of symbolic computation.

  3. Parallel computing on heterogeneous networks

    CERN Document Server

    Lastovetsky, Alexey L

    2004-01-01

    New approaches to parallel computing are being developed that make better use of the heterogeneous cluster architecture. Provides a detailed introduction to parallel computing on heterogeneous clusters. All concepts and algorithms are illustrated with working programs that can be compiled and executed on any cluster. The algorithms discussed have practical applications in a range of real-life parallel computing problems, such as the N-body problem, portfolio management, and the modeling of oil extraction.

  4. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
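
    A minimal sketch of the pattern (pi estimation standing in for neutron transport; seeds and names assumed): fixed per-worker seeds keep runs reproducible, one of the issues noted above.

        from multiprocessing import Pool
        import random

        def track_histories(args):
            worker, n = args
            rng = random.Random(1234 + worker)   # reproducible per-node stream
            hits = sum(rng.random()**2 + rng.random()**2 <= 1.0
                       for _ in range(n))
            return hits

        if __name__ == "__main__":
            workers, per_worker = 8, 100_000     # even static load balance
            tasks = [(w, per_worker) for w in range(workers)]
            with Pool(workers) as pool:
                hits = sum(pool.map(track_histories, tasks))
            print("pi ~", 4 * hits / (workers * per_worker))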

  5. World-wide distribution automation systems

    Energy Technology Data Exchange (ETDEWEB)

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system, substation feeder, and customer functions, potential benefits, automation costs, planning and engineering considerations, automation trends, databases, system operation, computer modeling of system, and distribution management systems.

  6. AUTOMATED API TESTING APPROACH

    Directory of Open Access Journals (Sweden)

    SUNIL L. BANGARE

    2012-02-01

    Full Text Available Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. With the help of software testing we can verify or validate the software product. Normally testing is done after the development of the software, but it can also be performed during the development process. This paper gives a brief introduction to an automated API testing tool. This type of testing relieves much of the burden after the development of the software, and it saves time as well as money. Such testing is also helpful in industry and in colleges.
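
    A minimal sketch of what such an automated API test can look like, using the requests library (assumed installed) against the public httpbin.org echo service; the checks are illustrative only:

        import requests

        def test_get_roundtrip():
            # Call the API and assert on status code and echoed payload.
            resp = requests.get("https://httpbin.org/get",
                                params={"q": "42"}, timeout=10)
            assert resp.status_code == 200
            assert resp.json()["args"]["q"] == "42"

        if __name__ == "__main__":
            test_get_roundtrip()
            print("API smoke test passed")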

  7. Automated radioimmunoassay of nicotine

    International Nuclear Information System (INIS)

    The authors have developed an automated nonequilibrium procedure for the radioimmunoassay of nicotine. The use of a unique iodinated nicotine derivative in this procedure gave a sensitivity of 10 μg/l for nicotine, with a between-run precision of 7.4% and a within-run precision of 6.0%. Nicotine levels of 60 to 67 μg/l were found in subjects 15 min after smoking one standard cigarette. The technique reported here is a very rapid and sensitive radioimmunoassay for nicotine and facilitates the determination of nicotine in smoking subjects during the actual process of smoking. (Auth.)

  8. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions....... The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows...

  9. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    This book proposes a novel design method that combines both genetic programming (GP) to automatically explore the open-ended design space and bond graphs (BG) to unify design representations of multi-domain Mechatronic systems. Results show that the method, formally called GPBG method, can...... successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  10. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  11. Parallel computation with the force

    Science.gov (United States)

    Jordan, H. F.

    1985-01-01

    A methodology, called the force, supports the construction of programs to be executed in parallel by a force of processes. The number of processes in the force is unspecified, but potentially very large. The force idea is embodied in a set of macros which produce multiprocessor FORTRAN code and has been studied on two shared-memory multiprocessors of fairly different character. The method has simplified the writing of highly parallel programs within a limited class of parallel algorithms and is being extended to cover a broader class. The individual parallel constructs which comprise the force methodology are discussed. Of central concern are their semantics, implementation on different architectures and performance implications.

  12. Parallel Algorithms for Normalization

    CERN Document Server

    Boehm, Janko; Laplagne, Santiago; Pfister, Gerhard; Steenpass, Andreas; Steidel, Stefan

    2011-01-01

    Given a reduced affine algebra A over a perfect field K, we present parallel algorithms to compute the normalization \bar{A} of A. Our starting point is the algorithm of Greuel, Laplagne, and Seelisch, which is an improvement of de Jong's algorithm. First, we propose to stratify the singular locus Sing(A) in a way which is compatible with normalization, apply a local version of the normalization algorithm at each stratum, and find \bar{A} by putting the local results together. Second, in the case where K = Q is the field of rationals, we propose modular versions of the global and local algorithms. We have implemented our algorithms in the computer algebra system SINGULAR and compare their performance with that of other algorithms. In the case where K = Q, we also discuss the use of modular computations of Groebner bases, radicals and primary decompositions. We point out that in most examples, the new algorithms outperform the algorithm of Greuel, Laplagne, and Seelisch by far, even if we do not run them in pa...

  13. Performance prediction of PARALLEL systems by simulation

    OpenAIRE

    Luque, E.; Suppi, R.; Margalef, T.; Sorribes, J.; Hernández, P.; César, E.; Serrano, M.; Ortet, C.; Cores, F.; Falguera, J.

    2012-01-01

    The simulation of parallel systems is an alternative approach to classical parallel system programming. This simulation provides performance prediction results that allow the reduction of parallel program development time-scale. Simulation requires accurate models of the parallel program, parallel architecture and all real characteristics of the parallel system. This paper describes a simulation environment of parallel systems that is included in a parallel programming environment in or...

  14. Parallel Programming in the Age of Ubiquitous Parallelism

    Science.gov (United States)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs

  15. Sunglint Detection for Unmanned and Automated Platforms

    Directory of Open Access Journals (Sweden)

    Oliver Zielinski

    2012-09-01

    Full Text Available We present an empirical quality control protocol for above-water radiometric sampling, focussing on identifying sunglint situations. Using hyperspectral radiometers, measurements were taken on an automated and unmanned seaborne platform in northwest European shelf seas. In parallel, a camera system was used to capture sea surface and sky images of the investigated points. The quality control consists of meteorological flags, to mask dusk, dawn, precipitation and low light conditions, utilizing incoming solar irradiance (ES) spectra. Using 629 from a total of 3,121 spectral measurements that passed the test conditions of the meteorological flagging, a new sunglint flag was developed. To detect sunglint visible in the simultaneously available sea surface images, a sunglint image detection algorithm was developed and implemented. Applying this algorithm, two sets of data were derived: one with sunglint (having many detectable white pixels) and one without sunglint (having few or no detectable white pixels). To identify the most effective sunglint flagging criteria we evaluated the spectral characteristics of these two data sets using water leaving radiance (LW) and remote sensing reflectance (RRS). Spectral conditions satisfying ‘mean LW(700–950 nm) < 2 mW∙m−2∙nm−1∙sr−1’ or, alternatively, ‘minimum RRS(700–950 nm) < 0.010 sr−1’ mask most measurements affected by sunglint, providing an efficient empirical flagging of sunglint in automated quality control.
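
    The flagging criteria quoted above translate directly into code. A minimal sketch, assuming spectra arrive as wavelength-to-value mappings and reading the criteria as "spectra failing both thresholds are flagged as sunglint" (function and variable names are ours, not the authors'):

        def sunglint_flag(lw, rrs):
            """Empirical sunglint flag from near-infrared thresholds.

            lw:  water-leaving radiance, {wavelength_nm: mW m^-2 nm^-1 sr^-1}
            rrs: remote-sensing reflectance, {wavelength_nm: sr^-1}
            """
            nir_lw = [v for w, v in lw.items() if 700 <= w <= 950]
            nir_rrs = [v for w, v in rrs.items() if 700 <= w <= 950]
            glint_free = (sum(nir_lw) / len(nir_lw) < 2.0   # mean LW(700-950 nm)
                          or min(nir_rrs) < 0.010)          # min RRS(700-950 nm)
            return not glint_free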

  16. Parallel Backtracking with Answer Memoing for Independent And-Parallelism

    CERN Document Server

    de Guzmán, Pablo Chico; Carro, Manuel; Hermenegildo, Manuel V

    2011-01-01

    Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals which will not interfere with each other at run time. This can be done safely even if such goals can produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While in principle simplifying the implementation, recomputation can be very inefficient if the granularity of the parallel goals is large enough and they produce several answers, while sequentially ordered backtracking limits parallelism. And, despite the expected simplification, the implementation of the classic schemes has proved to involve complex engineering, with the consequent difficulty for system maintenance and extension, while still frequently running into the well-known trapped goal and garbage slot problems. This work presents an alternative parallel backtracking model for IAP and its implementation. The model features parallel out-of-or...

  17. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; Illsley, Jeannette

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  18. Automated Test Case Generation

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  19. Automation from pictures

    International Nuclear Information System (INIS)

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. (author)
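
    The flavor of the resulting state program, external events driving transitions through a state table, can be sketched generically (this is not the Los Alamos SNL translator; the states, events, and actions below are invented for illustration):

        # Transition table: (state, event) -> (action, next_state)
        TABLE = {
            ("idle",    "start"): (lambda: print("pump on"),  "running"),
            ("running", "stop"):  (lambda: print("pump off"), "idle"),
            ("running", "fault"): (lambda: print("alarm"),    "idle"),
        }

        def run(events, state="idle"):
            for ev in events:  # execution is driven by external events
                action, state = TABLE.get((state, ev), (lambda: None, state))
                action()
            return state

        print(run(["start", "stop"]))  # pump on, pump off -> "idle"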

  20. Automated digital magnetofluidics

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, J; Garcia, A A; Marquez, M [Harrington Department of Bioengineering Arizona State University, Tempe AZ 85287-9709 (United States)], E-mail: tony.garcia@asu.edu

    2008-08-15

    Drops can be moved in complex patterns on superhydrophobic surfaces using a reconfigured computer-controlled x-y metrology stage with a high degree of accuracy, flexibility, and reconfigurability. The stage employs a DMC-4030 controller, which has a RISC-based, clock-multiplying processor with DSP functions, accepts encoder inputs up to 22 MHz, provides servo update rates as high as 32 kHz, and processes commands in as little as 40 milliseconds. A 6.35 mm diameter cylindrical NdFeB magnet is translated by the stage, causing water drops to move by the action of induced magnetization of coated iron microspheres that remain in the drop and are attracted to the rare earth magnet through digital magnetofluidics. Water drops are easily moved in complex patterns in automated digital magnetofluidics at an average speed of 2.8 cm/s over a superhydrophobic polyethylene surface created by solvent casting. With additional components, some potential uses for this automated microfluidic system include characterization of superhydrophobic surfaces, water quality analysis, and medical diagnostics.

  1. Automated Postediting of Documents

    CERN Document Server

    Knight, K; Knight, Kevin; Chander, Ishwar

    1994-01-01

    Large amounts of low- to medium-quality English texts are now being produced by machine translation (MT) systems, optical character readers (OCR), and non-native speakers of English. Most of this text must be postedited by hand before it sees the light of day. Improving text quality is tedious work, but its automation has not received much research attention. Anyone who has postedited a technical report or thesis written by a non-native speaker of English knows the potential of an automated postediting system. For the case of MT-generated text, we argue for the construction of postediting modules that are portable across MT systems, as an alternative to hardcoding improvements inside any one system. As an example, we have built a complete self-contained postediting module for the task of article selection (a, an, the) for English noun phrases. This is a notoriously difficult problem for Japanese-English MT. Our system contains over 200,000 rules derived automatically from online text resources. We report on l...

  2. Parallel Adaptive Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the
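
    The refinement decision at the heart of AMR can be illustrated in one dimension: flag only the cells where a local error indicator, here a simple gradient, exceeds a threshold, and refine only those. A toy sketch (the indicator and threshold are ours; production AMR codes use richer criteria):

        def flag_for_refinement(u, dx, threshold=1.0):
            """Return indices of cells whose local gradient exceeds the threshold."""
            flags = []
            for i in range(1, len(u) - 1):
                grad = abs(u[i + 1] - u[i - 1]) / (2 * dx)  # centered difference
                if grad > threshold:
                    flags.append(i)
            return flags

        # Smooth almost everywhere, steep near i = 50: only that region is refined.
        u = [0.0] * 50 + [10.0] * 50
        print(flag_for_refinement(u, dx=0.1))  # [49, 50]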

  3. Testing automation of projects in telecommunication domain

    OpenAIRE

    Alexey, Veselov; Vsevolod, Kotlyarov

    2010-01-01

    This paper presents an integrated approach to testing automation of telecommunication projects along with proposals to automation of conformance testing. The underlying idea is to benefit from combining formal verification and testing automation techniques in order to improve product quality.

  4. Parallelizing Monte Carlo with PMC

    Energy Technology Data Exchange (ETDEWEB)

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
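
    One of the services listed, independent and reproducible random number sequences per worker, can be imitated by deriving each worker's seed deterministically from its rank (a sketch of the idea only; PMC's actual interface routines are not shown):

        import random
        from multiprocessing import Pool

        BASE_SEED = 12345  # fixed base seed keeps every run reproducible

        def mc_chunk(rank, n=100_000):
            # Per-rank generator: results are independent of how chunks are
            # scheduled across processes, and identical from run to run.
            rng = random.Random(BASE_SEED + 1_000_003 * rank)
            hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                       for _ in range(n))
            return hits, n

        if __name__ == "__main__":
            with Pool(4) as pool:
                results = pool.starmap(mc_chunk, [(r,) for r in range(4)])
            hits = sum(h for h, _ in results)
            total = sum(n for _, n in results)
            print("pi ~", 4 * hits / total)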

  5. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    Full Text Available The problem of parallel import is an urgent question today. The legalization of parallel import in Russia is expedient; this conclusion is based on an analysis of opposing expert opinions. At the same time, it is necessary to consider the negative consequences of this decision and to apply remedies to minimize them.

  6. Parallel Computers in Signal Processing

    Directory of Open Access Journals (Sweden)

    Narsingh Deo

    1985-07-01

    Full Text Available Signal processing often requires a great deal of raw computing power for which it is important to take a look at parallel computers. The paper reviews various types of parallel computer architectures from the viewpoint of signal and image processing.

  7. Reordering computations for parallel execution

    Science.gov (United States)

    Adams, L.

    1985-01-01

    The computations in the SOR algorithm are reordered to obtain parallelism at different levels while maintaining the same asymptotic rate of convergence as the rowwise ordering. A parallel program is written to illustrate these ideas, and actual machines for the implementation of this program are discussed.
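
    The classic instance of such a reordering is red-black ordering: grid points are colored like a checkerboard, and all points of one color can be updated simultaneously because each depends only on neighbors of the other color. A sketch for Laplace's equation on a square grid (the relaxation factor and grid layout are illustrative):

        def rb_sor_sweep(u, omega=1.5):
            """One red-black SOR sweep for Laplace's equation.
            Each half-sweep updates mutually independent points, so either
            half-sweep could be executed fully in parallel."""
            n = len(u)
            for color in (0, 1):  # red points first, then black
                for i in range(1, n - 1):
                    for j in range(1, n - 1):
                        if (i + j) % 2 == color:
                            gs = 0.25 * (u[i-1][j] + u[i+1][j]
                                         + u[i][j-1] + u[i][j+1])
                            u[i][j] += omega * (gs - u[i][j])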

  8. Parallel FFT using Eden Skeletons

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg;

    2009-01-01

    approaches like calculating FFT using a parallel map-and-transpose skeleton provide more flexibility to overcome these problems. Assuming a distributed access to input data and re-organising computation to return results in a distributed way improves the parallel runtime behaviour....

  9. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally we prove that the family of languages of finite index is contained in the family of parallel context......-free languages....

  10. Parallel Estimation Respecting Constraints of Parametric Models of Cold Rolling

    Czech Academy of Sciences Publication Activity Database

    Ettler, P.; Kárný, Miroslav

    Cape Town: IFAC, 2010, s. 1-6. [Symposium on Automation in Mining, Mineral and Metal Processing /13./. Cape Town (ZA), 02.08.2010-04.08.2010] R&D Projects: GA MŠk 1M0572; GA MŠk(CZ) 7D09008 Institutional research plan: CEZ:AV0Z10750506 Keywords : parameter estimation * process model * identification * steel industry Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2010/AS/karny-parallel estimation respecting constraints of parametric models of cold rolling.pdf

  11. Automated Methods of Corrosion Measurements

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    . Mechanical control, recording, and data processing must therefore be automated to a high level of precision and reliability. These general techniques and the apparatus involved have been described extensively. The automated methods of such high-resolution microscopy coordinated with computerized...

  12. Opening up Library Automation Software

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  13. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation....

  14. Automated separation for heterogeneous immunoassays

    OpenAIRE

    Truchaud, A.; Barclay, J; Yvert, J. P.; Capolaghi, B.

    1991-01-01

    Besides general requirements for modern automated systems, immunoassay automation involves specific requirements, such as a separation step for heterogeneous immunoassays. Systems are designed according to the solid phase selected: dedicated or open robots for coated tubes and wells, systems nearly similar to chemistry analysers in the case of magnetic particles, and a completely original design for those using porous and film materials.

  15. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…

  16. Automated Methods Of Corrosion Measurements

    DEFF Research Database (Denmark)

    Bech-Nielsen, Gregers; Andersen, Jens Enevold Thaulov; Reeve, John Ch;

    1997-01-01

    The chapter describes the following automated measurements: Corrosion Measurements by Titration, Imaging Corrosion by Scanning Probe Microscopy, Critical Pitting Temperature and Application of the Electrochemical Hydrogen Permeation Cell.

  17. Shifting Control Algorithm for a Single-Axle Parallel Plug-In Hybrid Electric Bus Equipped with EMT

    OpenAIRE

    Yunyun Yang; Sen Wu; Xiang Fu

    2014-01-01

    Combining the characteristics of motor with fast response speed, an electric-drive automated mechanical transmission (EMT) is proposed as a novel type of transmission in this paper. Replacing the friction synchronization shifting of automated manual transmission (AMT) in HEVs, the EMT can achieve active synchronization of speed shifting. The dynamic model of a single-axle parallel PHEV equipped with the EMT is built up, and the dynamic properties of the gearshift process are also described. I...

  18. Parallel performance of a preconditioned CG solver for unstructured finite element applications

    Energy Technology Data Exchange (ETDEWEB)

    Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K. [Sandia National Labs., Albuquerque, NM (United States)

    1994-12-31

    A parallel unstructured finite element (FE) implementation designed for message passing MIMD machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
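
    The CG iteration itself is short; in the parallel setting, the matrix-vector product and the dot products below are exactly the communication kernels referred to above. A serial sketch, with a dense matvec standing in for the distributed sparse one:

        def cg(matvec, b, x, tol=1e-10, max_iter=1000):
            """Unpreconditioned conjugate gradients. In a message-passing code,
            matvec and the dot products are the inter-processor communication
            points."""
            r = [bi - ai for bi, ai in zip(b, matvec(x))]
            p = r[:]
            rs = sum(ri * ri for ri in r)
            for _ in range(max_iter):
                ap = matvec(p)
                alpha = rs / sum(pi * ai for pi, ai in zip(p, ap))
                x = [xi + alpha * pi for xi, pi in zip(x, p)]
                r = [ri - alpha * ai for ri, ai in zip(r, ap)]
                rs_new = sum(ri * ri for ri in r)
                if rs_new < tol:
                    break
                p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
                rs = rs_new
            return x

        A = [[4.0, 1.0], [1.0, 3.0]]  # small SPD test system
        mv = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
        print(cg(mv, [1.0, 2.0], [0.0, 0.0]))  # ~[0.0909, 0.6364]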

  19. Adaptive Parallel Iterative Deepening Search

    CERN Document Server

    Cook, D J; 10.1613/jair.518

    2011-01-01

    Many of the artificial intelligence techniques developed to date rely on heuristic search through large spaces. Unfortunately, the size of these spaces and the corresponding computational effort reduce the applicability of otherwise novel and effective algorithms. A number of parallel and distributed approaches to search have considerably improved the performance of the search process. Our goal is to develop an architecture that automatically selects parallel search strategies for optimal performance on a variety of search problems. In this paper we describe one such architecture realized in the Eureka system, which combines the benefits of many different approaches to parallel heuristic search. Through empirical and theoretical analyses we observe that features of the problem space directly affect the choice of optimal parallel search strategy. We then employ machine learning techniques to select the optimal parallel search strategy for a given problem space. When a new search task is input to the system, Eu...

  20. Parallel contingency statistics with Titan.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
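
    The structural reason contingency statistics scale worse than descriptive statistics is that the partial result is itself a table whose size can grow with the data, so the reduction step moves more than a fixed-size summary. The merge itself is simple (a generic count-and-merge sketch, independent of Titan's actual API):

        from collections import Counter

        def partial_table(pairs):
            """Each processor tabulates its share of the (x, y) observations."""
            return Counter(pairs)

        data = [("a", "x"), ("a", "y"), ("b", "x"), ("a", "x")]
        shards = [data[0::2], data[1::2]]   # stand-in for a real data distribution
        tables = [partial_table(s) for s in shards]
        merged = sum(tables, Counter())     # reduction: pointwise table addition
        print(merged)   # Counter({('a', 'x'): 2, ('a', 'y'): 1, ('b', 'x'): 1})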

  1. Unit Commitment Using Parallel Genetic Algorithms and Parallel Tabu Search

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Deok Hwan; Kang, Hyun Tae; Kwon, Jung Uk; Kim, Hyung Su; Park, Jung Ho; Hwang, Gi Hyun [Pusan National University (Korea)

    2001-07-01

    This paper presents the application of a parallel genetic algorithm and parallel tabu search to find an optimal solution of a unit commitment problem. The proposed method first searches the solution space globally using the parallel genetic algorithm, and then searches locally using tabu search, which has good local search characteristics, to reduce the computation time. This method combines the benefits of both methods and thus improves performance. To show the usefulness of the proposed method, we ran simulations for a 10-unit system. Numerical results show improvements in cost and computation time compared to previously obtained results. (author). 6 refs., 6 figs., 3 tabs.
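
    The two-phase structure, a global GA pass followed by local tabu refinement of the best individual, can be sketched generically. The toy objective, neighborhood, and parameters below are ours, not the paper's unit-commitment model, and the sketch is serial:

        import random

        N = 16  # bits per solution (stand-in for unit on/off schedules)
        fitness = lambda s: -sum(s)  # toy objective: maximize the number of ones
        random_solution = lambda: tuple(random.randint(0, 1) for _ in range(N))
        neighbors = lambda s: [s[:i] + (1 - s[i],) + s[i+1:] for i in range(N)]

        def crossover(a, b):
            cut = random.randrange(1, len(a))
            return a[:cut] + b[cut:]

        def mutate(s, rate=0.05):
            return tuple(b ^ (random.random() < rate) for b in s)

        def hybrid_search(pop_size=20, generations=50, tabu_iters=50, tabu_len=10):
            # Phase 1: the GA searches globally (run in parallel in the paper).
            pop = [random_solution() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                keep = pop[:pop_size // 2]
                pop = keep + [mutate(crossover(*random.sample(keep, 2)))
                              for _ in range(pop_size - len(keep))]
            best = min(pop, key=fitness)
            # Phase 2: tabu search refines the best GA solution locally.
            current, tabu = best, []
            for _ in range(tabu_iters):
                cands = [n for n in neighbors(current) if n not in tabu]
                if not cands:
                    break
                current = min(cands, key=fitness)
                tabu = (tabu + [current])[-tabu_len:]
                best = min((best, current), key=fitness)
            return best

        print(hybrid_search())  # converges to the all-ones schedule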

  2. Automated Standard Hazard Tool

    Science.gov (United States)

    Stebler, Shane

    2014-01-01

    The current system used to generate standard hazard reports is considered cumbersome and iterative. This study defines a structure for this system's process in a clear, algorithmic way so that standard hazard reports and basic hazard analysis may be completed using a centralized, web-based computer application. To accomplish this task, a test server is used to host a prototype of the tool during development. The prototype is configured to integrate easily into NASA's current server systems with minimal alteration. Additionally, the tool is easily updated and provides NASA with a system that may grow to accommodate future requirements and, possibly, different applications. Results of this project's success are outlined in positive, subjective reviews completed by payload providers and NASA Safety and Mission Assurance personnel. Ideally, this prototype will increase interest in the concept of standard hazard automation and lead to the full-scale production of a user-ready application.

  3. Expedition automated flow fluorometer

    Science.gov (United States)

    Krikun, V. A.; Salyuk, P. A.

    2015-11-01

    This paper describes the apparatus and operation of an automated flow-through dual-channel fluorometer for studying the fluorescence of dissolved organic matter, and the fluorescence of phytoplankton cells with open and closed reaction centers, in sea areas with oligotrophic and eutrophic water types. Step-by-step excitation by two semiconductor lasers or two light-emitting diodes is realized in the current device. The excitation wavelengths are 405 nm and 532 nm in the default configuration. The excitation radiation of each light source can be varied in duration, intensity and repetition rate. Registration of the fluorescence signal is carried out by two photomultipliers with different optical filters, with band-pass ranges of 580–600 nm and 680–700 nm. The configuration of excitation sources and the spectral ranges of registered radiation can be changed according to the tasks at hand.

  4. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  5. Berkeley automated supernova search

    International Nuclear Information System (INIS)

    The Berkeley automated supernova search employs a computer controlled 36-inch telescope and charge coupled device (CCD) detector to image 2500 galaxies per night. A dedicated minicomputer compares each galaxy image with stored reference data to identify supernovae in real time. The threshold for detection is m_v = 18.8. We plan to monitor roughly 500 galaxies in Virgo and closer every night, and an additional 6000 galaxies out to 70 Mpc on a three night cycle. This should yield very early detection of several supernovae per year for detailed study, and reliable premaximum detection of roughly 100 supernovae per year for statistical studies. The search should be operational in mid-1982

  6. Automated synthetic scene generation

    Science.gov (United States)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  7. Automated Electrostatics Environmental Chamber

    Science.gov (United States)

    Calle, Carlos; Lewis, Dean C.; Buchanan, Randy K.; Buchanan, Aubri

    2005-01-01

    The Mars Electrostatics Chamber (MEC) is an environmental chamber designed primarily to create atmospheric conditions like those at the surface of Mars to support experiments on electrostatic effects in the Martian environment. The chamber is equipped with a vacuum system, a cryogenic cooling system, an atmospheric-gas replenishing and analysis system, and a computerized control system that can be programmed by the user and that provides both automation and options for manual control. The control system can be set to maintain steady Mars-like conditions or to impose temperature and pressure variations of a Mars diurnal cycle at any given season and latitude. In addition, the MEC can be used in other areas of research because it can create steady or varying atmospheric conditions anywhere within the wide temperature, pressure, and composition ranges between the extremes of Mars-like and Earth-like conditions.

  8. [From automation to robotics].

    Science.gov (United States)

    1985-01-01

    The introduction of automation into the laboratory of biology seems to be unavoidable. But at which cost, if it is necessary to purchase a new machine for every new application? Fortunately the same image processing techniques, belonging to a theoretic framework called Mathematical Morphology, may be used in visual inspection tasks, both in car industry and in the biology lab. Since the market for industrial robotics applications is much higher than the market of biomedical applications, the price of image processing devices drops, and becomes sometimes less than the price of a complete microscope equipment. The power of the image processing methods of Mathematical Morphology will be illustrated by various examples, as automatic silver grain counting in autoradiography, determination of HLA genotype, electrophoretic gels analysis, automatic screening of cervical smears... Thus several heterogeneous applications may share the same image processing device, provided there is a separate and devoted work station for each of them. PMID:4091303

  9. Parallel NPARC: Implementation and Performance

    Science.gov (United States)

    Townsend, S. E.

    1996-01-01

    Version 3 of the NPARC Navier-Stokes code includes support for large-grain (block level) parallelism using explicit message passing between a heterogeneous collection of computers. This capability has the potential for significant performance gains, depending upon the block data distribution. The parallel implementation uses a master/worker arrangement of processes. The master process assigns blocks to workers, controls worker actions, and provides remote file access for the workers. The processes communicate via explicit message passing using an interface library which provides portability to a number of message passing libraries, such as PVM (Parallel Virtual Machine). A Bourne shell script is used to simplify the task of selecting hosts, starting processes, retrieving remote files, and terminating a computation. This script also provides a simple form of fault tolerance. An analysis of the computational performance of NPARC is presented, using data sets from an F/A-18 inlet study and a Rocket Based Combined Cycle Engine analysis. Parallel speedup and overall computational efficiency were obtained for various NPARC run parameters on a cluster of IBM RS6000 workstations. The data show that although NPARC performance compares favorably with the estimated potential parallelism, typical data sets used with previous versions of NPARC will often need to be reblocked for optimum parallel performance. In one of the cases studied, reblocking increased peak parallel speedup from 3.2 to 11.8.
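
    The master/worker arrangement reduces to a small pattern: the master hands out blocks, workers return results, and faster workers simply receive more blocks. A sketch with a process pool standing in for the explicit message passing (block contents are fabricated):

        from multiprocessing import Pool

        def solve_block(block_id):
            # Stand-in for advancing one grid block; in NPARC a worker process
            # does this and exchanges boundary data via explicit messages.
            return block_id, sum(i * i for i in range(100_000))

        if __name__ == "__main__":
            blocks = range(8)        # unevenly sized blocks skew the speedup,
            with Pool(4) as pool:    # which is why reblocking matters
                for bid, _ in pool.imap_unordered(solve_block, blocks):
                    print("block", bid, "done")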

  10. Practical Parallel External Memory Algorithms via Simulation of Parallel Algorithms

    CERN Document Server

    Robillard, David E

    2010-01-01

    This thesis introduces PEMS2, an improvement to PEMS (Parallel External Memory System). PEMS executes Bulk-Synchronous Parallel (BSP) algorithms in an External Memory (EM) context, enabling computation with very large data sets which exceed the size of main memory. Many parallel algorithms have been designed and implemented for Bulk-Synchronous Parallel models of computation. Such algorithms generally assume that the entire data set is stored in main memory at once. PEMS overcomes this limitation without requiring any modification to the algorithm by using disk space as memory for additional "virtual processors". Previous work has shown this to be a promising approach which scales well as computational resources (i.e. processors and disks) are added. However, the technique incurs significant overhead when compared with purpose-built EM algorithms. PEMS2 introduces refinements to the simulation process intended to reduce this overhead as well as the amount of disk space required to run the simulation. New func...

  11. Parallel programming characteristics of a DSP-based parallel system

    Institute of Scientific and Technical Information of China (English)

    GAO Shu; GUO Qing-ping

    2006-01-01

    This paper first introduces the structure and working principle of a DSP-based parallel system, parallel accelerating board and SHARC DSP chip. It then investigates the system's programming characteristics, especially the mode of communication, discussing how to design parallel algorithms and presenting a domain-decomposition-based complete multi-grid parallel algorithm with virtual boundary forecast (VBF) to solve large-scale and complicated heat problems. In the end, the Mandelbrot set and a non-linear heat transfer equation of a ceramic/metal composite material are taken as examples to illustrate the implementation of the proposed algorithm. The results showed that the solutions are highly efficient and exhibit linear speedup.

  12. Parallel Backtracking with Answer Memoing for Independent And-Parallelism

    OpenAIRE

    Chico de Guzmán, Pablo; Casas, Amadeo; Carro Liñares, Manuel; Hermenegildo, Manuel V.

    2011-01-01

    Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals which will not interfere with each other at run time. This can be done safely even if such goals can produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While in principle simplifying the implementation, recomputation can be very inefficient if the granularity of the parallel goal...

  13. Communication scheduling in parallel task executions on large parallel systems

    OpenAIRE

    Hai Xiang Lin

    2012-01-01

    Scheduling is an important issue in parallel processing. Most scheduling algorithms assign tasks in a directed acyclic graph (DAG) to processors. Usually only the allocation and ordering of tasks are considered, and sometimes communication time is included in the determination of the priorities of the tasks; however, communication messages are not explicitly scheduled. Moreover, communication contention plays an increasingly important role because of the increased system size of parallel compu...

  14. Multigrid on massively parallel architectures

    International Nuclear Information System (INIS)

    The scalable implementation of multigrid methods for machines with several thousands of processors is investigated. Parallel performance models are presented for three different structured-grid multigrid algorithms, and a description is given of how these models can be used to guide implementation. Potential pitfalls are illustrated when moving from moderate-sized parallelism to large-scale parallelism, and results are given from existing multigrid codes to support the discussion. Finally, the use of mixed programming models is investigated for multigrid codes on clusters of SMPs

  15. Software for parallel processing applications

    International Nuclear Information System (INIS)

    Parallel computing has been used to solve large computing problems in high-energy physics. Typical problems include offline event reconstruction, Monte Carlo event-generation and reconstruction, and lattice QCD calculations. Fermilab has extensive experience in parallel computing using CPS (cooperative processes software) and networked UNIX workstations for the loosely-coupled problems of event reconstruction and Monte Carlo generation and CANOPY and ACPMAPS for lattice QCD. Both systems will be discussed. Parallel software has been developed by many other groups, both commercial and research-oriented. Examples include PVM, Express and network-Linda for workstation clusters and PCN and STRAND88 for more tightly-coupled machines.

  16. Parallel Architecture For Robotics Computation

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  17. Parallel Triangles Counting Using Pipelining

    OpenAIRE

    Aráoz, Julián; Zoltan, Cristina

    2015-01-01

    The general method for obtaining a parallel solution to a computational problem is to find a way to use the Divide & Conquer paradigm so that processors act on their own data and can all be scheduled in parallel. MapReduce is an example of this approach: input data is transformed by the mappers in order to feed the reducers, which can run in parallel. In general this schema gives efficient problem solutions, but this stops being true when the replication factor grows. We present a...

  18. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 × 10^9 bits per 14 × 17 inch film, the equivalent of 2200 computer floppy discs. Parts handling systems and robotics, applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to automate fully the exposure step. Finally, computer aided interpretation appears on the horizon. A unit which laser scans a 14 × 17 inch film in 6 - 8 seconds can digitize film information for further manipulation and possible automatic interrogations (computer aided interpretation). The system, called FDRS (for Film Digital Radiography System), is moving toward 50 micron (approximately 16 lines/mm) resolution. This is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system (author)

  19. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 × 10^9 bits per 14 × 17 inch film. This is equivalent to 2200 computer floppy discs. Parts handling systems and robotics applied for manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to automate fully the exposure step. Finally, computer aided interpretation appears on the horizon. A unit which laser scans a 14 × 17 inch film in 6 - 8 seconds can digitize film information for further manipulation and possible automatic interrogations (computer aided interpretation). The system called FDRS (for Film Digital Radiography System) is moving toward 50 micron (16 lines/mm) resolution. This is believed to meet the need of the majority of image content needs. We expect the automated system to appear first in separate parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system

  20. Automated Fluid Interface System (AFIS)

    Science.gov (United States)

    1990-01-01

    Automated remote fluid servicing will be necessary for future space missions, as future satellites will be designed for on-orbit consumable replenishment. In order to develop an on-orbit remote servicing capability, a standard interface between a tanker and the receiving satellite is needed. The objective of the Automated Fluid Interface System (AFIS) program is to design, fabricate, and functionally demonstrate compliance with all design requirements for an automated fluid interface system. A description and documentation of the Fairchild AFIS design is provided.

  1. Sub-Second Parallel State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

    2014-10-31

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool box and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of the sub-second SE, such as that the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements, and providing opportunities to enhance the power grid reliability and efficiency. PSE also can enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate effects

  2. Parallel processing in core supervision

    International Nuclear Information System (INIS)

    SCORPIO is a computer-based supervision system intended as a tool for operators of nuclear plants. In this paper the main functions of the system are defined, and the applicability of parallel processing in core supervision is discussed. (orig.)

  3. Some parallel banded system solvers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sameh, A.H.

    1984-12-01

    This paper describes algorithms for solving narrow banded systems and the Helmholtz difference equations that are suitable for multiprocessing systems. The organization of the algorithms highlights the large-grain parallelism inherent in the problems. 13 references, 5 tables.

  4. Predicting performance of parallel computations

    Science.gov (United States)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
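
    For the series-parallel task graphs used as model input, expected execution time composes simply: series components add and parallel components take the maximum. A sketch of that recursion (resource contention, which the queuing network model accounts for, is ignored here):

        def predict(node):
            """node is ('task', time), ('seq', [children]) or ('par', [children])."""
            kind, payload = node
            if kind == 'task':
                return payload
            times = [predict(child) for child in payload]
            return sum(times) if kind == 'seq' else max(times)

        g = ('seq', [('task', 2.0),
                     ('par', [('task', 3.0), ('task', 5.0)]),
                     ('task', 1.0)])
        print(predict(g))  # 2 + max(3, 5) + 1 = 8.0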

  5. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors, tree contraction and expression tree evaluation. We also study the problems of computing the connected and biconnected components of a graph, the minimum spanning tree of a connected graph and the ear decomposition of a biconnected graph. All our solutions on a P-processor PEM model provide an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.
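
    The standard parallel approach to list ranking is pointer jumping: in each round every element adds in the rank of its successor and then doubles the distance its pointer spans, so O(log n) rounds suffice. A serial simulation of those rounds (the PEM model's block transfers are not modeled):

        def list_rank(nxt):
            """nxt[i] is the successor of i (the tail points to itself).
            Returns rank[i] = number of links from i to the tail."""
            n = len(nxt)
            rank = [0 if nxt[i] == i else 1 for i in range(n)]
            nxt = nxt[:]
            for _ in range(n.bit_length()):  # O(log n) pointer-jumping rounds
                # In a PRAM/PEM setting, every i below is processed in parallel.
                rank = [rank[i] + rank[nxt[i]] for i in range(n)]
                nxt = [nxt[nxt[i]] for i in range(n)]
            return rank

        # List 0 -> 2 -> 1 -> 3 (tail): ranks are distances to the tail.
        print(list_rank([2, 3, 1, 3]))  # [3, 1, 2, 0]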

  6. Parallel programming of industrial applications

    Energy Technology Data Exchange (ETDEWEB)

    Heroux, M; Koniges, A; Simon, H

    1998-07-21

    In the introductory material, we overview the typical MPP environment for real application computing and the special tools available such as parallel debuggers and performance analyzers. Next, we draw from a series of real applications codes and discuss the specific challenges and problems that are encountered in parallelizing these individual applications. The application areas drawn from include biomedical sciences, materials processing and design, plasma and fluid dynamics, and others. We show how it was possible to get a particular application to run efficiently and what steps were necessary. Finally we end with a summary of the lessons learned from these applications and predictions for the future of industrial parallel computing. This tutorial is based on material from a forthcoming book entitled: "Industrial Strength Parallel Computing" to be published by Morgan Kaufmann Publishers (ISBN 1-55860-54).

  7. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  8. Parallel encoders for pixel detectors

    International Nuclear Information System (INIS)

    A new method of fast encoding and determining the multiplicity and coordinates of fired pixels is described. A specific example construction of parallel encoders and MCC for n=49 and t=2 is given. 16 refs.; 6 figs.; 2 tabs

  9. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  10. HEATR project: ATR algorithm parallelization

    Science.gov (United States)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable, support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR Team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This would allow the Team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) HPCMP structure as it relates to HEATR, (2) Overall structure of the HEATR project, (3) Preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) Project management issues and lessons learned.

  11. Tactile Displays with Parallel Mechanism

    OpenAIRE

    Kyung, Ki-Uk; Kwon, Dong-Soo

    2008-01-01

    This chapter deals with tactile displays and their mechanisms. We briefly review the research history of mechanical tactile displays and their parallel arrangements, and mainly describe two systems that include tactile displays. A 5x6 pin-array tactile display with a parallel arrangement of piezoelectric bimorphs is described in Section 3. The tactile display has been embedded into a mouse device and the performance of the device has been verified from pattern display...

  12. Explicit Parallel Programming: System Description

    OpenAIRE

    Gamble, Jim; Ribbens, Calvin J.

    1991-01-01

    The implementation of the Explicit Parallel Programming (EPP) system is described. EPP is a prototype implementation of a language for writing parallel programs for shared memory multiprocessors. EPP may be viewed as a coordination language, since it is used to define the sequencing or ordering of various tasks, while the tasks themselves are defined in some other compilable language. The two main components of the EPP system, a compiler and an executive, are described in this report. An...

  13. Explicit Parallel Programming: User's Guide

    OpenAIRE

    Gamble, Jim; Ribbens, Calvin J.

    1991-01-01

    The Explicit Parallel Programming (EPP) language is defined and illustrated with several examples. EPP is a prototype implementation of a language for writing parallel programs for shared memory multiprocessors. EPP may be viewed as a coordination language, since it is used to define the sequencing or ordering of various tasks, while the tasks themselves are defined in some other compilable language. The prototype described here requires FORTRAN as the base language, but there is no inheren...

  14. High availability for parallel computers

    OpenAIRE

    Rexachs del Rosario, Dolores; Luque Fadón, Emilio

    2010-01-01

    Fault tolerance has become an important issue for parallel applications in the last few years. The parallel systems' users want them to be reliable considering two main dimensions, availability and data consistency. Availability can be provided with solutions such as RADIC, a fault tolerant architecture with different protection levels, offering high availability with transparency, decentralization, flexibility and scalability for message-passing systems. Transient faults may cause an applica...

  15. Parallel computation of normal forms

    OpenAIRE

    Roldán González, Pablo

    2004-01-01

    The time to compute normal forms numerically grows very fast with respect to the degree of the normal form and the dimension of the system. We design and implement a parallel algorithm based on the parallel computation of Poisson brackets. The implementation is based on previous work by A. Jorba, and uses the C programming language and PVM. We present a theoretical and empirical efficiency analysis of the algorithm. We also suggest some possible improvements of the algorithm.

  16. Parallel Krylov subspace basis computation

    OpenAIRE

    Sidje, R. B.; B. Philippe

    1994-01-01

    Numerical methods related on Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated through their representation onto orthonormal bases. Nowadays, on serial computers, the method of Arnoldi is considered as a reliable technique for constructing such bases. Unfortunately, this technique is rather inflexible to be efficiently implemented on parallel computers. In this report we examine several parallel and stable algorithms based on...
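    For orientation, the Arnoldi process referred to builds an orthonormal basis of the Krylov subspace by successive orthogonalization; the sequential inner products of modified Gram-Schmidt are what make it inflexible on parallel machines. A minimal numpy rendering of the standard algorithm (not the report's parallel variants):

```python
import numpy as np

def arnoldi(A, v, m):
    """Build an orthonormal basis V of the Krylov subspace K_m(A, v)
    and the (m+1) x m upper Hessenberg matrix H with A V_m = V_{m+1} H."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt:
            H[i, j] = V[:, i] @ w           # these dot products serialize,
            w -= H[i, j] * V[:, i]          # which is what hurts parallelism
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:                # happy breakdown: invariant subspace
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

A = np.random.rand(50, 50)
V, H = arnoldi(A, np.random.rand(50), 10)
print(np.allclose(V.T @ V, np.eye(V.shape[1])))  # basis is orthonormal
```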

  17. Teaching Parallel Programming Using Java

    OpenAIRE

    Shafi, Aamir; Akhtar, Aleem; Javed, Ansar; Carpenter, Bryan

    2014-01-01

    This paper presents an overview of the "Applied Parallel Computing" course taught to final year Software Engineering undergraduate students in Spring 2014 at NUST, Pakistan. The main objective of the course was to introduce practical parallel programming tools and techniques for shared and distributed memory concurrent systems. A unique aspect of the course was that Java was used as the principal programming language. The course was divided into three sections. The first section covered paral...

  18. Automated continuous verification and validation for numerical simulation

    Directory of Open Access Journals (Sweden)

    P. E. Farrell

    2010-09-01

    Full Text Available Verification and validation are crucially important for the final users of a computational model: code is useless if its results cannot be relied upon. Typically, undergoing these processes is seen as a discrete event, performed once and for all after development is complete. However, this does not reflect the reality that many geoscientific codes undergo continuous development of the mathematical model, discretisation and software implementation. Therefore, we advocate that in such cases verification and validation must be continuous and happen in parallel with development. The desirability of their automation follows immediately. This paper discusses a framework for automated continuous verification and validation of wide applicability to any kind of numerical simulation. It also documents a range of rigorous test cases for use in computational and geophysical fluid dynamics.
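    In practice, continuous verification of this kind boils down to rerunning each test case on every change and comparing key diagnostics against stored reference values within a tolerance. A minimal sketch of such a check follows; the file name, diagnostics dict and tolerance are illustrative, not the framework's interface.

```python
import json
import numpy as np

def check_against_reference(diagnostics, reference_file, rtol=1e-6):
    """Compare a dict of scalar diagnostics from a simulation run against
    stored reference values; any drift beyond rtol fails the build."""
    with open(reference_file) as f:
        reference = json.load(f)
    failures = []
    for name, value in diagnostics.items():
        ref = reference[name]
        if not np.isclose(value, ref, rtol=rtol):
            failures.append(f"{name}: got {value}, expected {ref}")
    return failures

# Illustrative usage inside a CI job:
# failures = check_against_reference(run_simulation(), "reference.json")
# if failures: raise SystemExit("\n".join(failures))
```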

  19. Automation of servicibility of radio-relay station equipment

    Science.gov (United States)

    Uryev, A. G.; Mishkin, Y. I.; Itkis, G. Y.

    1985-03-01

    Automation of the serviceability inspection of radio relay station equipment must ensure central gathering and primary processing of reliable instrument readings with subsequent display on the control panel, sufficiently early detection and recording of failures, advance warning based on analysis of deterioration symptoms, and correct remote measurement of equipment performance parameters. Such inspection will minimize transmission losses while reducing nonproductive time and labor spent on documentation and measurement. A multichannel automated inspection system for this purpose should operate by a parallel rather than sequential procedure. Digital data processing is more expedient in this case than analog methods and, therefore, analog-to-digital converters are required. Special normal, above-limit and below-limit test signals provide a means of self-inspection, to which must be added adequate interference immunity, stabilization, and standby power supply. Use of a microcomputer permits overall refinement and expansion of the inspection system while minimizing, though not completely eliminating, dependence on subjective judgment.

  20. Automated business processes in outbound logistics: An information system perspective

    DEFF Research Database (Denmark)

    Tambo, Torben

    2010-01-01

    This article analyses the potentials and possibilities of changing outbound logistics from highly labour intensive on the information processing side to a more or less fully automated solution. Automation offers advantages in terms of direct labour cost reduction as well as indirect cost reduction...... process alignment with a highly standardised outbound logistics although serving a vast range of customers and countries. Expressing a number of compliance requirements and associated business processes outlines the design criteria for the information system. Implementation of this design with bespoke ERP...... is not a matter of whether the system can or cannot, but a matter of making a technological and economical best fit. Along the formal implementation issues there is a parallel process focused on a mutuality between IT teams, business users, management and external stakeholders in offering relevant...

  1. Tools for the Automation of Large Distributed Control Systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit - SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large scale, high complexity, applications.
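    The flavor of the FSM-plus-rules approach can be conveyed with a toy hierarchy in which state changes propagate upward and parent rules react to child states. The class and rule below are invented for illustration and are not SMI++'s actual language.

```python
# Toy sketch of a hierarchical, rule-driven control tree in the spirit of
# SMI++ (invented API, not the real toolkit): each node is a finite state
# machine; rules on a parent react to state changes of its children.
class Node:
    def __init__(self, name, children=()):
        self.name, self.state, self.children = name, "UNKNOWN", list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

    def set_state(self, state):
        self.state = state
        if self.parent:
            self.parent.evaluate_rules()   # propagate upward through the tree

    def evaluate_rules(self):
        # Rule: if all children are READY, the subsystem declares itself READY;
        # if any child is in ERROR, start an automatic recovery action.
        states = {c.state for c in self.children}
        if states == {"READY"}:
            self.set_state("READY")
        elif "ERROR" in states:
            self.set_state("RECOVERING")

dcs = Node("DCS", [Node("HV"), Node("Cooling")])
dcs.children[0].set_state("READY")
dcs.children[1].set_state("READY")
print(dcs.state)  # READY
```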

  2. Tools for the automation of large control systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit – SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large scale, high complexity, applications.

  3. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.

  4. Using Natural Language Processing to Improve Accuracy of Automated Notifiable Disease Reporting

    OpenAIRE

    Friedlin, Jeff; Grannis, Shaun; Overhage, J Marc

    2008-01-01

    We examined whether using a natural language processing (NLP) system results in improved accuracy and completeness of automated electronic laboratory reporting (ELR) of notifiable conditions. We used data from a community-wide health information exchange that has automated ELR functionality. We focused on methicillin-resistant Staphylococcus Aureus (MRSA), a reportable infection found in unstructured, free-text culture result reports. We used the Regenstrief EXtraction tool (REX) for this wor...

  5. National Automated Conformity Inspection Process

    Data.gov (United States)

    Department of Transportation — The National Automated Conformity Inspection Process (NACIP) Application is intended to expedite the workflow process as it pertains to the FAA Form 810-10 Request...

  6. Automating the Purple Crow Lidar

    Science.gov (United States)

    Hicks, Shannon; Sica, R. J.; Argall, P. S.

    2016-06-01

    The Purple Crow LiDAR (PCL) was built to measure short and long term coupling between the lower, middle, and upper atmosphere. The initial component of my MSc. project is to automate two key elements of the PCL: the rotating liquid mercury mirror and the Zaber alignment mirror. In addition to the automation of the Zaber alignment mirror, it is also necessary to describe the mirror's movement and positioning errors. Its properties will then be added into the alignment software. Once the alignment software has been completed, we will compare the new alignment method with the previous manual procedure. This is the first among several projects that will culminate in a fully-automated lidar. Eventually, we will be able to work remotely, thereby increasing the amount of data we collect. This paper will describe the motivation for automation, the methods we propose, preliminary results for the Zaber alignment error analysis, and future work.

  7. Home automation with Intel Galileo

    CERN Document Server

    Dundar, Onur

    2015-01-01

    This book is for anyone who wants to learn Intel Galileo for home automation and cross-platform software development. No knowledge of programming with Intel Galileo is assumed, but knowledge of the C programming language is essential.

  8. Museum Automation with RFID

    OpenAIRE

    Sahba, Farshid; Nazaridoust, Maryam

    2014-01-01

    As people's culture and knowledge increase, demand for visiting museums has grown, making the management of these places more complex. Valuable objects in a museum or ancient site must be well maintained, and visitors must be managed. To protect the objects we must prevent theft, and environmental factors such as temperature, humidity, pH, chemical factors and mechanical events should be monitored; if conditions become damaging, appropriate alerts or ...

  9. Towards automated traceability maintenance.

    Science.gov (United States)

    Mäder, Patrick; Gotel, Orlena

    2012-10-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. PMID:23471308

  10. Automated document analysis system

    Science.gov (United States)

    Black, Jeffrey D.; Dietzel, Robert; Hartnett, David

    2002-08-01

    A software application has been developed to aid law enforcement and government intelligence gathering organizations in the translation and analysis of foreign language documents with potential intelligence content. The Automated Document Analysis System (ADAS) provides the capability to search (data or text mine) documents in English and the most commonly encountered foreign languages, including Arabic. Hardcopy documents are scanned by a high-speed scanner and are optical character recognized (OCR). Documents obtained in an electronic format bypass the OCR and are copied directly to a working directory. For translation and analysis, the script and the language of the documents are first determined. If the document is not in English, the document is machine translated to English. The documents are searched for keywords and key features in either the native language or translated English. The user can quickly review the document to determine if it has any intelligence content and whether detailed, verbatim human translation is required. The documents and document content are cataloged for potential future analysis. The system allows non-linguists to evaluate foreign language documents and allows for the quick analysis of a large quantity of documents. All document processing can be performed manually or automatically on a single document or a batch of documents.

  11. Automated Supernova Discovery (Abstract)

    Science.gov (United States)

    Post, R. S.

    2015-12-01

    (Abstract only) We are developing a system of robotic telescopes for automatic recognition of Supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data is taken every night without clouds, we must deal with varying atmospheric and high background illumination from the moon. Software is configured to identify a PSN, reshoot for verification with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24, with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs with magnitude 17.5 or less which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.

  12. Automated Stellar Spectral Classification

    Science.gov (United States)

    Bailer-Jones, Coryn; Irwin, Mike; von Hippel, Ted

    1996-05-01

    Stellar classification has long been a useful tool for probing important astrophysical phenomena. Beyond simply categorizing stars it yields fundamental stellar parameters, acts as a probe of galactic abundance distributions and gives a first foothold on the cosmological distance ladder. The MK system in particular has survived on account of its robustness to changes in the calibrations of the physical parameters. Nonetheless, if stellar classification is to continue as a useful tool in stellar surveys, then it must adapt to keep pace with the large amounts of data which will be acquired as magnitude limits are pushed ever deeper. We are working on a project to automate the multi-parameter classification of visual stellar spectra, using artificial neural networks and other techniques. Our techniques have been developed with 10,000 spectra (B Analysis as a front-end compression of the data. Our continuing work also looks at the application of synthetic spectra to the direct classification of spectra in terms of the physical parameters of Teff, log g, and [Fe/H].

  13. Genetic circuit design automation.

    Science.gov (United States)

    Nielsen, Alec A K; Der, Bryan S; Shin, Jonghyeon; Vaidyanathan, Prashant; Paralanov, Vanya; Strychalski, Elizabeth A; Ross, David; Densmore, Douglas; Voigt, Christopher A

    2016-04-01

    Computation can be performed in living cells by DNA-encoded circuits that process sensory information and control biological functions. Their construction is time-intensive, requiring manual part assembly and balancing of regulator expression. We describe a design environment, Cello, in which a user writes Verilog code that is automatically transformed into a DNA sequence. Algorithms build a circuit diagram, assign and connect gates, and simulate performance. Reliable circuit design requires the insulation of gates from genetic context, so that they function identically when used in different circuits. We used Cello to design 60 circuits for Escherichia coli (880,000 base pairs of DNA), for which each DNA sequence was built as predicted by the software with no additional tuning. Of these, 45 circuits performed correctly in every output state (up to 10 regulators and 55 parts), and across all circuits 92% of the output states functioned as predicted. Design automation simplifies the incorporation of genetic circuits into biotechnology projects that require decision-making, control, sensing, or spatial organization. PMID:27034378

  14. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machine, as an engineering tool, has recently been introduced into automation projects in Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machine for the automation projects. This paper designs different project scenarios using virtual machine. It analyzes installability, performance and stability of virtual machine from the test results. Technical solutions concerning virtual machine are discussed such as the conversion with physical...

  15. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  16. Aprendizaje automático

    OpenAIRE

    Moreno, Antonio

    2006-01-01

    This book introduces the basic concepts of one of the most actively studied branches of artificial intelligence today: machine learning. It covers topics such as inductive learning, analogical reasoning, explanation-based learning, neural networks, genetic algorithms, case-based reasoning, and theoretical approaches to machine learning.

  17. A Method for Automated Program Code Testing

    Directory of Open Access Journals (Sweden)

    Sigitas DRĄSUTIS

    2010-10-01

    Full Text Available The Internet has recently encouraged society to convert almost all of its needs to electronic resources such as e-libraries, e-culture, e-entertainment and e-learning, which has become a radical idea for increasing the effectiveness of learning services in most schools, colleges and universities. E-learning cannot be complete without e-testing. However, in many cases e-testing tools are suitable only for traditional/theoretical knowledge testing, covered by items such as questions, quizzes and matching boxes. The article ``A Method for Automated Program Code Testing'' tackles this lack of functionality in e-testing systems and suggests e-assessment possibilities for students who study computer science, especially programming. The article analyzes a method that allows answers to be entered freely, checks program syntax during the testing, and enables automatic checking and evaluation of the written code.
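    The automatic written-code checking the article advocates is typically built as a harness that runs a submission against input/output test cases in a subprocess. Below is a minimal sketch; the file name and test cases are illustrative, and real graders add sandboxing and resource limits.

```python
import subprocess

def grade(source_file, test_cases, timeout=2):
    """Run a student's Python program against (stdin, expected stdout) pairs.
    Returns the fraction of passed cases."""
    passed = 0
    for stdin_text, expected in test_cases:
        try:
            proc = subprocess.run(
                ["python3", source_file], input=stdin_text,
                capture_output=True, text=True, timeout=timeout)
            if proc.returncode == 0 and proc.stdout.strip() == expected.strip():
                passed += 1
        except subprocess.TimeoutExpired:
            pass  # infinite loop or too slow: counts as a failed case
    return passed / len(test_cases)

# Hypothetical usage: a program that doubles an integer read from stdin.
cases = [("2\n", "4"), ("10\n", "20")]
print(grade("submission.py", cases))
```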

  18. A Droplet Microfluidic Platform for Automating Genetic Engineering.

    Science.gov (United States)

    Gach, Philip C; Shih, Steve C C; Sustarich, Jess; Keasling, Jay D; Hillson, Nathan J; Adams, Paul D; Singh, Anup K

    2016-05-20

    We present a water-in-oil droplet microfluidic platform for transformation, culture and expression of recombinant proteins in multiple host organisms including bacteria, yeast and fungi. The platform consists of a hybrid digital microfluidic/channel-based droplet chip with integrated temperature control to allow complete automation and integration of plasmid addition, heat-shock transformation, addition of selection medium, culture, and protein expression. The microfluidic format permitted significant reduction in consumption (100-fold) of expensive reagents such as DNA and enzymes compared to the benchtop method. The chip contains a channel to continuously replenish oil to the culture chamber to provide a fresh supply of oxygen to the cells for long-term (∼5 days) cell culture. The flow channel also replenished oil lost to evaporation and increased the number of droplets that could be processed and cultured. The platform was validated by transforming several plasmids into Escherichia coli including plasmids containing genes for fluorescent proteins GFP, BFP and RFP; plasmids with selectable markers for ampicillin or kanamycin resistance; and a Golden Gate DNA assembly reaction. We also demonstrate the applicability of this platform for transformation in widely used eukaryotic organisms such as Saccharomyces cerevisiae and Aspergillus niger. Duration and temperatures of the microfluidic heat-shock procedures were optimized to yield transformation efficiencies comparable to those obtained by benchtop methods with a throughput up to 6 droplets/min. The proposed platform offers potential for automation of molecular biology experiments significantly reducing cost, time and variability while improving throughput. PMID:26830031

  19. The economics of parallel trade.

    Science.gov (United States)

    Danzon, P M

    1998-03-01

    The potential for parallel trade in the European Union (EU) has grown with the accession of low price countries and the harmonisation of registration requirements. Parallel trade implies a conflict between the principle of autonomy of member states to set their own pharmaceutical prices, the principle of free trade and the industrial policy goal of promoting innovative research and development (R&D). Parallel trade in pharmaceuticals does not yield the normal efficiency gains from trade because countries achieve low pharmaceutical prices by aggressive regulation, not through superior efficiency. In fact, parallel trade reduces economic welfare by undermining price differentials between markets. Pharmaceutical R&D is a global joint cost of serving all consumers worldwide; it accounts for roughly 30% of total costs. Optimal (welfare maximising) pricing to cover joint costs (Ramsey pricing) requires setting different prices in different markets, based on inverse demand elasticities. By contrast, parallel trade and regulation based on international price comparisons tend to force price convergence across markets. In response, manufacturers attempt to set a uniform 'euro' price. The primary losers from 'euro' pricing will be consumers in low income countries who will face higher prices or loss of access to new drugs. In the long run, even higher income countries are likely to be worse off with uniform prices, because fewer drugs will be developed. One policy option to preserve price differentials is to exempt on-patent products from parallel trade. An alternative is confidential contracting between individual manufacturers and governments to provide country-specific ex post discounts from the single 'euro' wholesale price, similar to rebates used by managed care in the US. This would preserve differentials in transactions prices even if parallel trade forces convergence of wholesale prices. PMID:10178655
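    The Ramsey-pricing argument can be made concrete with its standard textbook statement (this is the general inverse-elasticity rule, not a formula taken from the article):

```latex
\frac{p_i - c_i}{p_i} \;=\; \frac{\lambda}{\varepsilon_i},
\qquad 0 < \lambda < 1,
```

    where \(p_i\) is the price in market \(i\), \(c_i\) the marginal cost, \(\varepsilon_i\) the local price elasticity of demand, and \(\lambda\) is set so that total revenue covers the joint R&D cost. More elastic (typically lower-income) markets get lower markups; parallel trade, by forcing prices together, removes exactly this structure.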

  20. Synchronizing Parallel Tasks Using STM

    Directory of Open Access Journals (Sweden)

    Ryan Saptarshi Ray

    2015-03-01

    Full Text Available The past few years have marked the start of a historic transition from sequential to parallel computation. The necessity to write parallel programs is increasing as systems are getting more complex while processor speed increases are slowing down. Current parallel programming uses low-level constructs such as threads and explicit synchronization with locks to coordinate thread execution. Parallel programs written with these constructs are difficult to design, program and debug. Locks also have drawbacks that make them a suboptimal solution: in particular, they should enclose only the critical section of the parallel code, because enclosing the entire code in locks drastically decreases performance. Software Transactional Memory (STM) is a promising new approach to programming shared-memory parallel processors. It is a concurrency control mechanism widely considered easier for programmers to use than locking. It allows portions of a program to execute in isolation, without regard to other, concurrently executing tasks, so a programmer can reason about the correctness of code within a transaction without worrying about complex interactions with other, concurrently executing parts of the program. If STM is used to enclose the entire code, performance matches that of code where STM encloses only the critical section, and far exceeds that of code where locks enclose the entire code. STM is therefore easier to use than locks, since the critical section need not be identified. This paper presents the concept of writing code using Software Transactional Memory (STM), compares the performance of lock-based code with STM-based code, and shows why using STM in parallel code is preferable to using locks.
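    A toy illustration of the transactional model follows, with optimistic reads, commit-time validation and automatic retry. This is a deliberately simplified sketch: real STMs avoid the single commit lock and handle inconsistent mid-transaction reads more carefully.

```python
# Toy optimistic STM sketch (illustrative only, not a production STM):
# transactions read versioned cells, then validate and publish atomically.
import threading

_commit_lock = threading.Lock()

class TVar:
    """Transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

def atomically(txn):
    """Run txn(read, write); retry whenever a concurrent commit is detected."""
    while True:
        read_log = {}    # TVar -> version observed at first read
        write_log = {}   # TVar -> tentative new value
        def read(tv):
            if tv in write_log:
                return write_log[tv]
            read_log.setdefault(tv, tv.version)
            return tv.value
        def write(tv, value):
            read_log.setdefault(tv, tv.version)
            write_log[tv] = value
        result = txn(read, write)
        with _commit_lock:  # validation + publication is the only locked region
            if all(tv.version == v for tv, v in read_log.items()):
                for tv, value in write_log.items():
                    tv.value = value
                    tv.version += 1
                return result
        # Conflict: another transaction committed first; rerun txn.

a, b = TVar(100), TVar(0)
def transfer(read, write):   # no critical section identified by the programmer
    write(a, read(a) - 10)
    write(b, read(b) + 10)
atomically(transfer)
print(a.value, b.value)  # 90 10
```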

  1. Experiments with parallel algorithms for combinatorial problems

    OpenAIRE

    Kindervater, Gerard; Trienekens, H.W.J.M.

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines constructed so far all use a simple model of parallel computation. Therefore, not every existing parallel machine is equally well suited for each type of algorithm. The adaptation of a certain alg...

  2. Urine culture

    Science.gov (United States)

    Culture and sensitivity - urine ... when urinating. You also may have a urine culture after you have been treated for an infection. ... when bacteria or yeast are found in the culture. This likely means that you have a urinary ...

  3. Endocervical culture

    Science.gov (United States)

    Vaginal culture; Female genital tract culture; Culture - cervix ... During a vaginal examination, the health care provider uses a ... fungus grow. Further tests may be done to identify the specific ...

  4. Fecal culture

    Science.gov (United States)

    Stool culture; Culture - stool ... stool tests are done in addition to the culture, such as: Gram stain of stool Fecal smear ... Giannella RA. Infectious enteritis and proctocolitis and bacterial food poisoning. In: Feldman M, Friedman LS, Brandt LJ, ...

  5. Bounded Parallel-Batch Scheduling on Unrelated Parallel Machines

    Science.gov (United States)

    Miao, Cuixia; Zhang, Yuzhong; Wang, Chengfei

    In this paper, we consider the bounded parallel-batch scheduling problem on unrelated parallel machines. Problems Rm|B|F are NP-hard for any objective function F. For this reason, we discuss the special case with p_ij = p_i for i = 1, 2, ..., m and j = 1, 2, ..., n. We give optimal algorithms for minimizing the total weighted completion time, the makespan and the number of tardy jobs. We also design pseudo-polynomial time algorithms for the case with rejection penalty, minimizing the makespan and the total weighted completion time plus the total penalty of the rejected jobs, respectively.

  6. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  7. Evaluating parallel optimization on transputers

    Directory of Open Access Journals (Sweden)

    A.G. Chalmers

    2003-12-01

    Full Text Available The faster processing power of modern computers and the development of efficient algorithms have made it possible for operations researchers to tackle a much wider range of problems than ever before. Further improvements in processing speed can be achieved by utilising relatively inexpensive transputers to process components of an algorithm in parallel. The Davidon-Fletcher-Powell method is one of the most successful and widely used optimisation algorithms for unconstrained problems. This paper examines the algorithm and identifies the components that can be processed in parallel. The results of some experiments with these components are presented, indicating under what conditions parallel processing with an inexpensive configuration is likely to be faster than the traditional sequential implementations. The performance of the whole algorithm with its parallel components is then compared with the original sequential algorithm. The implementation serves to illustrate the practicalities of speeding up typical OR algorithms in terms of difficulty, effort and cost. The results give an indication of the savings in time a given parallel implementation can be expected to yield.
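    For reference, the Davidon-Fletcher-Powell method maintains an approximation \(H_k\) to the inverse Hessian; in the standard textbook form (not quoted from the paper), with \(s_k = x_{k+1} - x_k\) and \(y_k = \nabla f(x_{k+1}) - \nabla f(x_k)\),

```latex
H_{k+1} \;=\; H_k
\;+\; \frac{s_k s_k^{\top}}{s_k^{\top} y_k}
\;-\; \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k}.
```

    The matrix-vector products such as \(H_k y_k\), the rank-one updates, and the function and gradient evaluations of the line search are the natural components to distribute across transputers.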

  8. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.

  9. Organizational Culture

    OpenAIRE

    Adrian HUDREA

    2006-01-01

    Cultural orientations of an organization can be its greatest strength, providing the basis for problem solving, cooperation, and communication. Culture, however, can also inhibit needed changes. Cultural changes typically happen slowly – but without cultural change, many other organizational changes are doomed to fail. The dominant culture of an organization is a major contributor to its success. But, of course, no organizational culture is purely one type or another. And the existence of sec...

  10. Safeguards Culture

    Energy Technology Data Exchange (ETDEWEB)

    Frazar, Sarah L.; Mladineo, Stephen V.

    2012-07-01

    The concepts of nuclear safety and security culture are well established; however, a common understanding of safeguards culture is not internationally recognized. Supported by the National Nuclear Security Administration, the authors prepared this report, an analysis of the concept of safeguards culture, and gauged its value to the safeguards community. The authors explored distinctions between safeguards culture, safeguards compliance, and safeguards performance, and evaluated synergies and differences between safeguards culture and safety/security culture. The report concludes with suggested next steps.

  11. Automated ship image acquisition

    Science.gov (United States)

    Hammond, T. R.

    2008-04-01

    The experimental Automated Ship Image Acquisition System (ASIA) collects high-resolution ship photographs at a shore-based laboratory, with minimal human intervention. The system uses Automatic Identification System (AIS) data to direct a high-resolution SLR digital camera to ship targets and to identify the ships in the resulting photographs. The photo database is then searchable using the rich data fields from AIS, which include the name, type, call sign and various vessel identification numbers. The high-resolution images from ASIA are intended to provide information that can corroborate AIS reports (e.g., extract identification from the name on the hull) or provide information that has been omitted from the AIS reports (e.g., missing or incorrect hull dimensions, cargo, etc). Once assembled into a searchable image database, the images can be used for a wide variety of marine safety and security applications. This paper documents the author's experience with the practicality of composing photographs based on AIS reports alone, describing a number of ways in which this can go wrong, from errors in the AIS reports, to fixed and mobile obstructions and multiple ships in the shot. The frequency with which various errors occurred in automatically-composed photographs collected in Halifax harbour in winter time were determined by manual examination of the images. 45% of the images examined were considered of a quality sufficient to read identification markings, numbers and text off the entire ship. One of the main technical challenges for ASIA lies in automatically differentiating good and bad photographs, so that few bad ones would be shown to human users. Initial attempts at automatic photo rating showed 75% agreement with manual assessments.

  12. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers belong to the electric power system equipment whose reliability greatly influences the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to switch off a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-break circuit breakers systematically increase. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. This, however, demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repairs, as well as advanced software and a specific automated information system (AIS). A new AIS with the AISV logo was developed at the Reliability of Power Equipment department of the AzRDSI of Energy. The main features of AISV are: to provide database security and accuracy; to carry out systematic control of breaker conformity with operating conditions; to estimate individual reliability values and how they change for a given combination of characteristics; and to provide the personnel responsible for technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving the given problem and advanced methods for their realization.

  13. Organizational Culture

    Directory of Open Access Journals (Sweden)

    Adrian HUDREA

    2006-02-01

    Full Text Available Cultural orientations of an organization can be its greatest strength, providing the basis for problem solving, cooperation, and communication. Culture, however, can also inhibit needed changes. Cultural changes typically happen slowly – but without cultural change, many other organizational changes are doomed to fail. The dominant culture of an organization is a major contributor to its success. But, of course, no organizational culture is purely one type or another. And the existence of secondary cultures can provide the basis for change. Therefore, organizations need to understand the cultural environments and values.

  14. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    Science.gov (United States)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electronic content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
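    The two DBSCAN inputs named above map directly onto scikit-learn's standard interface, which is a convenient serial baseline before a manycore port; the total-electron-content data below is simulated.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Simulated stand-in for geolocated total-electron-content anomalies:
# two dense patches plus uniform background noise.
rng = np.random.default_rng(0)
patches = np.concatenate([rng.normal((10, 50), 0.5, (200, 2)),
                          rng.normal((30, 40), 0.5, (200, 2))])
noise = rng.uniform((0, 30), (40, 60), (100, 2))
X = np.concatenate([patches, noise])

labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(X)  # (1) distance threshold
                                                         # (2) min points within it
print("clusters found:", labels.max() + 1, "| noise points:", (labels == -1).sum())
```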

  15. Simplified Automated Image Analysis for Detection and Phenotyping of Mycobacterium tuberculosis on Porous Supports by Monitoring Growing Microcolonies

    OpenAIRE

    den Hertog, Alice L.; Dennis W Visser; Ingham, Colin J.; Frank H A G Fey; Paul R Klatser; Anthony, Richard M.

    2010-01-01

    BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high burden settings. METHODS: Here we explore the growth of Mycobacterial tubercul...

  16. Visualizing Parallel Computer System Performance

    Science.gov (United States)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  17. PARALLEL SELF-ORGANIZING MAP

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new self-organizing map, the parallel self-organizing map (PSOM), was proposed for parallel information processing. In this model there are two separate layers of neurons connected together; the number of neurons in each layer, and of the connections between them, equals the total number of elements of the input signal. Weight updating is managed through a sequence of operations among unitary transformation and operation matrices, so the conventional repeated learning procedure is modified to learn just once, and an algorithm was developed to realize this new learning method. On a typical classification example, PSOM demonstrated convergence results similar to Kohonen's model. Theoretical analysis and proofs also showed some interesting properties of PSOM. As pointed out, the contribution of such a network may not be so significant, but its parallel mode may be interesting for quantum computation.
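    For contrast, the conventional Kohonen training loop that PSOM's learn-once formulation replaces looks like the following standard sketch (this is the classic SOM, not the PSOM scheme):

```python
import numpy as np

def som_train(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Classic Kohonen SOM: repeated presentation of inputs, best-matching
    unit search, and neighbourhood-weighted weight updates."""
    rng = np.random.default_rng(0)
    W = rng.random((grid[0], grid[1], data.shape[1]))   # weight vectors
    ii, jj = np.meshgrid(range(grid[0]), range(grid[1]), indexing="ij")
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                     # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5         # shrinking neighbourhood
        for x in rng.permutation(data):
            d = np.linalg.norm(W - x, axis=2)           # distance to every unit
            bi, bj = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
            W += lr * h[..., None] * (x - W)            # pull neighbours toward x
    return W

W = som_train(np.random.rand(200, 3))
print(W.shape)  # (10, 10, 3)
```

    The repeated loop over epochs and samples is exactly what PSOM's one-pass, matrix-operation formulation removes.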

  18. Massively parallel femtosecond laser processing.

    Science.gov (United States)

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by the horizontal number of pixels in the SLM, N_SLM, that is imaged at the pupil plane of an objective lens, and a distance parameter p_d obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. A performance limitation of parallel laser processing in our system was estimated at N_SLM of 250 and p_d of 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815

  19. Generalized Quantum Search with Parallelism

    CERN Document Server

    Gingrich, R M; Cerf, N J; Gingrich, Robert; Williams, Colin P.; Cerf, Nicolas

    2000-01-01

    We generalize Grover's unstructured quantum search algorithm to enable it to use an arbitrary starting superposition and an arbitrary unitary matrix simultaneously. We derive an exact formula for the probability of the generalized Grover's algorithm succeeding after n iterations. We show that the fully generalized formula reduces to the special cases considered by previous authors. We then use the generalized formula to determine the optimal strategy for using the unstructured quantum search algorithm. On average the optimal strategy is about 12% better than the naive use of Grover's algorithm. The speedup obtained is not dramatic but it illustrates that a hybrid use of quantum computing and classical computing techniques can yield a performance that is better than either alone. We extend the analysis to the case of a society of k quantum searches acting in parallel. We derive an analytic formula that connects the degree of parallelism with the optimal strategy for k-parallel quantum search. We then derive th...
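    The special case that the generalized formula must reduce to is the textbook success probability of Grover search with \(k\) marked items among \(N\) and a uniform starting superposition (a standard result, not the paper's generalized expression):

```latex
P(n) \;=\; \sin^2\!\bigl((2n+1)\,\theta\bigr),
\qquad \sin\theta = \sqrt{k/N},
```

    so the optimal number of iterations is \(n^{\ast} \approx (\pi/4)\sqrt{N/k}\).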

  20. PARAVT: Parallel Voronoi Tessellation code

    CERN Document Server

    Gonzalez, Roberto E

    2016-01-01

    We present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. There are several serial Voronoi tessellation codes; however, no open-source parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density and the Voronoi cell volume for each particle, and can compute the density on a regular grid.
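    As a serial, single-node reference for the per-particle quantities PARAVT computes, scipy's Voronoi wrapper (which also uses Qhull) can produce cell volumes and densities. Unbounded edge cells are simply skipped in this sketch, whereas PARAVT handles task boundaries and periodicity explicitly.

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

points = np.random.rand(2000, 3)          # toy particle positions
vor = Voronoi(points)                     # scipy wraps Qhull, as PARAVT does

volumes = np.full(len(points), np.nan)
for i, reg_idx in enumerate(vor.point_region):
    region = vor.regions[reg_idx]
    if -1 in region or not region:        # unbounded cell at the domain edge
        continue                          # (PARAVT treats these via boundaries)
    volumes[i] = ConvexHull(vor.vertices[region]).volume

density = 1.0 / volumes                   # Voronoi density estimate per particle
print(np.nanmedian(density))
```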

  1. Parallel algorithms for mapping pipelined and parallel computations

    Science.gov (United States)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm^3) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm^2) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.
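    The flavor of these mapping problems is captured by the classic dynamic program for the simplest case: assign m modules, in order, to n processors in contiguous blocks so that the heaviest block (the bottleneck) is as light as possible. The baseline below is a sketch of that kind of algorithm, not the paper's improved one.

```python
def min_bottleneck(weights, n):
    """Split weights (module costs, in order) into n contiguous non-empty
    blocks (n <= len(weights)), minimizing the maximum block sum.
    O(n * m^2) dynamic program."""
    m = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    INF = float("inf")
    # best[k][j]: minimal bottleneck mapping the first j modules to k processors
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for k in range(1, n + 1):
        for j in range(1, m + 1):
            for i in range(j):  # last block is modules i..j-1
                block = prefix[j] - prefix[i]
                best[k][j] = min(best[k][j], max(best[k - 1][i], block))
    return best[n][m]

print(min_bottleneck([4, 1, 3, 2, 5, 2], 3))  # -> 7, e.g. [4,1] [3,2] [5,2]
```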

  2. Automated security response robot

    Science.gov (United States)

    Ciccimaro, Dominic A.; Everett, Hobart R.; Gilbreath, Gary A.; Tran, Tien T.

    1999-01-01

    ROBART III is intended as an advanced demonstration platform for non-lethal response measures, extending the concepts of reflexive teleoperation into the realm of coordinated weapons control in law enforcement and urban warfare scenarios. A rich mix of ultrasonic and optical proximity and range sensors facilitates remote operation in unstructured and unexplored buildings with minimal operator supervision. Autonomous navigation and mapping of interior spaces is significantly enhanced by an innovative algorithm which exploits the fact that the majority of man-made structures are characterized by parallel and orthogonal walls. Extremely robust intruder detection and assessment capabilities are achieved through intelligent fusion of inputs from a multitude of onboard motion sensors. Intruder detection is addressed by a 360-degree staring array of passive-IR motion detectors, augmented by a number of positionable head-mounted sensors. Automatic camera tracking of a moving target is accomplished using a video line digitizer. Non-lethal response systems include a six-barrelled pneumatically-powered Gatling gun, high-powered strobe lights, and three ear-piercing 103-decibel sirens.

  3. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  4. Medipix2 parallel readout system

    Science.gov (United States)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  5. ITER LHe Plants Parallel Operation

    Science.gov (United States)

    Fauve, E.; Bonneton, M.; Chalifour, M.; Chang, H.-S.; Chodimella, C.; Monneret, E.; Vincent, G.; Flavien, G.; Fabre, Y.; Grillot, D.

    The ITER Cryogenic System includes three identical liquid helium (LHe) plants, with a total average cooling capacity equivalent to 75 kW at 4.5 K. The LHe plants provide the 4.5 K cooling power to the magnets and cryopumps. They are designed to operate in parallel and to handle heavy load variations. In this proceeding we describe the present status of the ITER LHe plants, with emphasis on i) the project schedule, ii) the plants' characteristics/layout and iii) the basic principles and control strategies for a stable operation of the three LHe plants in parallel.

  6. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    The underlying neural mechanisms of a perceptual bias for in-phase bimanual coordination movements are not well understood. In the present study, we measured brain activity with functional magnetic resonance imaging in healthy subjects during a task, where subjects performed bimanual index finger...... a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  7. Parallelization of the PC Algorithm

    DEFF Research Database (Denmark)

    Madsen, Anders Læsø; Jensen, Frank; Salmerón, Antonio; Langseth, Helge; Nielsen, Thomas Dyhre

    2015-01-01

    This paper describes a parallel version of the PC algorithm for learning the structure of a Bayesian network from data. The PC algorithm is a constraint-based algorithm consisting of five steps where the first step is to perform a set of (conditional) independence tests while the remaining four....... The proposed parallel PC algorithm is evaluated on data sets generated at random from five different real- world Bayesian networks. The results demonstrate that significant time performance improvements are possible using the proposed algorithm....
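
    Because the independence tests of the first step do not depend on one another, they can be farmed out to worker processes. A hedged sketch of that idea (not the authors' implementation; ci_test is a hypothetical stand-in for a real statistical test):

```python
# Sketch: parallelizing the independence-testing step of a PC-style algorithm.
from itertools import combinations
from multiprocessing import Pool

def ci_test(pair):
    x, y = pair
    # hypothetical stand-in: a real test would compute, e.g., partial
    # correlation or a G-test on the data for variables x and y
    return pair, (x + y) % 3 == 0

if __name__ == "__main__":
    pairs = list(combinations(range(8), 2))
    with Pool() as pool:                        # tests run in parallel workers
        results = pool.map(ci_test, pairs)
    # edges judged dependent remain in the learned skeleton
    skeleton = [pair for pair, independent in results if not independent]
    print(skeleton)
```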

  8. Recent trends in parallel programming

    Czech Academy of Sciences Publication Activity Database

    Jakl, Ondřej

    Ostrava: ÚGN AV ČR, 2007 - (Blaheta, R.; Starý, J.), s. 54-58 ISBN 978-80-86407-12-8. [Seminar on Numerical Analysis. Modelling and Simulation of Challenging Engineering Problems. Winter School. High-performance and Parallel Computers, Programming Technologies & Numerical Linear Algebra. Ostrava (CZ), 22.01.2007-26.01.2007] R&D Projects: GA AV ČR 1ET400300415; GA MŠk 1N04035 Institutional research plan: CEZ:AV0Z30860518 Keywords: high performance computing * parallel programming * MPI Subject RIV: BA - General Mathematics

  9. Programmable automation systems in PSA

    International Nuclear Information System (INIS)

    The Finnish safety authority (STUK) requires plant-specific PSAs, and quantitative safety goals are set on different levels. The reliability analysis is more problematic when critical safety functions are realized by applying programmable automation systems. Conventional modeling techniques do not necessarily apply to the analysis of these systems, and quantification seems to be impossible. However, it is important to analyze the contribution of programmable automation systems to plant safety, and PSA is the only method with a system-analytical view over safety. This report discusses the applicability of PSA methodology (fault tree analyses, failure modes and effects analyses) to the analysis of programmable automation systems. The problem of how to decompose programmable automation systems for reliability modeling purposes is discussed. In addition to the qualitative analysis and structural reliability modeling issues, the possibility of evaluating failure probabilities of programmable automation systems is considered. One solution to the quantification issue is the use of expert judgements, and the principles for applying expert judgements are discussed in the paper. A framework for applying expert judgements is outlined. Further, the impacts of subjective estimates on the interpretation of PSA results are discussed. (orig.) (13 refs.)

  10. An automated method for high-throughput protein purification applied to a comparison of His-tag and GST-tag affinity chromatography

    Directory of Open Access Journals (Sweden)

    Büssow Konrad

    2003-07-01

    Background: Functional genomics, the systematic characterisation of the functions of an organism's genes, includes the study of the gene products, the proteins. Such studies require methods to express and purify these proteins in a parallel, time- and cost-effective manner. Results: We developed a method for parallel expression and purification of recombinant proteins with a hexahistidine tag (His-tag) or glutathione S-transferase tag (GST-tag) from bacterial expression systems. Proteins are expressed in 96-well microplates and are purified by a fully automated procedure on a pipetting robot. Up to 90 microgram of purified protein can be obtained from 1 ml microplate cultures. The procedure is readily reproducible and 96 proteins can be purified in approximately three hours. It avoids clearing of crude cellular lysates and the use of magnetic affinity beads and is therefore less expensive than comparable commercial systems. We have used this method to compare purification of a set of human proteins via His-tag or GST-tag. Proteins were expressed as fusions to an N-terminal tandem His- and GST-tag and were purified by metal chelating or glutathione affinity chromatography. The purity of the obtained protein samples was similar, yet His-tag purification resulted in higher yields for some proteins. Conclusion: A fully automated, robust and cost-effective method was developed for the purification of proteins that can be used to quickly characterise expression clones in high throughput and to produce large numbers of proteins for functional studies. His-tag affinity purification was found to be more efficient than purification via GST-tag for some proteins.

  11. International Conference Automation : Challenges in Automation, Robotics and Measurement Techniques

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2016-01-01

    This book presents the set of papers accepted for presentation at the International Conference Automation, held in Warsaw, 2-4 March 2016. It presents research results by top experts in the fields of industrial automation, control, robotics and measurement techniques. Each chapter presents a thorough analysis of a specific technical problem, usually followed by numerical analysis, simulation, and a description of the results of implementing the solution to a real-world problem. The presented theoretical results, practical solutions and guidelines will be valuable both for researchers working in the area of engineering sciences and for practitioners solving industrial problems.

  12. Industrial cultures

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    1996-01-01

    The chapter deals with different paradigms and theories of cultural development. The problem of explaining change, and methods for analysing development in different cultures, are presented and discussed.

  13. Culture matters.

    Science.gov (United States)

    Arif, Zeba

    Zeba Arif reflects on changes during her career as a mental health nurse in relation to cultural care issues: Cultural awareness is becoming embedded in patient care. All aspects of care are influenced by cultural beliefs and should form part of assessment. Leadership is essential in influencing cultural care, as is organisational commitment. PMID:16262169

  14. CSP description of some parallel sorting algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Linck, M.H.

    1982-11-01

    Hoare's CSP notation is used to describe three parallel sorting algorithms. The first algorithm uses n/2 processes working in parallel, the second uses an array of n parallel processes, and the third algorithm is a parallel version of quicksort. 12 references.
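
    The "array of n processes" scheme corresponds to odd-even transposition sort, in which each element behaves like a small communicating process that repeatedly compares and exchanges with a neighbour. A sequential simulation of that scheme (a sketch, not Linck's CSP formulation):

```python
# Odd-even transposition sort: each element can be thought of as a CSP
# process that repeatedly compares-and-swaps with its neighbour. This
# sequential simulation is a sketch, not the paper's CSP notation.
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):                       # n phases suffice
        start = phase % 2                        # alternate odd/even pairs
        for i in range(start, n - 1, 2):         # all pairs in a phase are independent
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]  # "exchange" between neighbours
    return a

print(odd_even_sort([5, 2, 9, 1, 7, 3]))         # [1, 2, 3, 5, 7, 9]
```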

  15. Composition and Rhetoric Usage of Parallelism

    Institute of Scientific and Technical Information of China (English)

    祖林; 朱蕾

    2012-01-01

    Parallelism achieves its rhetorical effect by syntactic means. The use of parallelism creates an effect of balanced beauty between words, between sentences, and between paragraphs. From a semantic perspective, all the parts and components of a parallelism are closely related; parallelism plays an important role in creating rhetorical effect and strengthening the tone.

  16. Open architecture for multilingual parallel texts

    CERN Document Server

    Benitez, M T Carrasco

    2008-01-01

    Multilingual parallel texts (abbreviated to parallel texts) are linguistic versions of the same content ("translations"); e.g., the Maastricht Treaty in English and Spanish are parallel texts. This document is about creating an open architecture for the whole Authoring, Translation and Publishing Chain (ATP-chain) for the processing of parallel texts.

  17. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine;

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction termed parallel artificial liquid membrane extraction. A donor plate and acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an...

  18. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  19. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...
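
    The kernel in question is simple enough to sketch: with the matrix in compressed sparse row (CSR) form, each output row is independent, which is what makes both the MPI and the MPI-OpenMP decompositions straightforward. A plain Python rendering (not the report's C++ classes):

```python
# Sparse matrix-vector product y = A*x with A in CSR form. Each y[i] is
# independent, so rows can be partitioned into contiguous blocks and
# assigned to threads or MPI ranks.
def spmv_csr(values, col_idx, row_ptr, x):
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):                      # each iteration is independent
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 2x2 example: A = [[4, 0], [1, 3]]
print(spmv_csr([4.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 2.0]))  # [4.0, 7.0]
```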

  20. Einstein: Distant parallelism and electromagnetism

    Energy Technology Data Exchange (ETDEWEB)

    Israelit, M.; Rosen, N.

    1985-03-01

    Einstein's approach to unified field theories based on the geometry of distant parallelism is discussed. The simplest theory of this type, describing gravitation and electromagnetism, is investigated. It is found that there is a charge-current density vector associated with the geometry. However, in the static spherically symmetric case no singularity-free solutions for this vector exist.

  1. New Methodologies for Parallel Architecture

    Institute of Scientific and Technical Information of China (English)

    Dong-Rui Fan; Xiao-Wei Li; Guo-Jie Li

    2011-01-01

    Moore's law continues to grant computer architects ever more transistors in the foreseeable future, and parallelism is the key to continued performance scaling in modern microprocessors. In this paper, the achievements of our research project on parallel architecture, which is supported by the National Basic Research 973 Program of China, are systematically presented. The innovative approaches and techniques to solve the significant problems in parallel architecture design are summarized, including architecture-level optimization, compiler- and language-supported technologies, reliability, power-performance-efficient design, test and verification challenges, and platform building. Two prototype chips, a multi-heavy-core Godson-3 and a many-light-core Godson-T, are described to demonstrate the highly scalable and reconfigurable parallel architecture designs. We also present some of our achievements appearing in ISCA, MICRO, ISSCC, HPCA, PLDI, PACT, IJCAI, Hot Chips, DATE, IEEE Trans. VLSI, IEEE Micro, IEEE Trans. Computers, etc.

  2. Informativeness of Parallel Kalman Filters

    OpenAIRE

    Hajiyev, Chingiz

    2004-01-01

    This article considers the informativeness of parallel Kalman filters. Expressions are derived for determination of the amount of information obtained by additional measurements with a reserved measurement channel during processing. The theorems asserting that there is an increase in the informativeness of Kalman filters when there is a failure-free reserved measurement channel are proved.
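
    The intuition can be illustrated with a scalar toy example: fusing the outputs of two parallel, failure-free measurement channels yields a lower posterior variance, i.e. more information, than either channel alone (an illustration of the general idea, not Hajiyev's derivation):

```python
# Toy scalar measurement fusion: two parallel channels measure the same
# quantity; the fused estimate has lower variance than either alone.
def fuse(z1, r1, z2, r2):
    """Minimum-variance combination of two unbiased measurements."""
    w = r2 / (r1 + r2)                 # weight on channel 1
    z = w * z1 + (1 - w) * z2
    r = r1 * r2 / (r1 + r2)            # fused variance < min(r1, r2)
    return z, r

z, r = fuse(10.2, 0.5, 9.8, 0.5)
print(z, r)                            # 10.0, 0.25 -- half the single-channel variance
```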

  3. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  4. Lab on a chip automates in vitro cell culturing

    DEFF Research Database (Denmark)

    Perozziello, Gerardo; Møllenbach, Jacob; Laursen, Steen;

    2012-01-01

    A novel in vitro fertilization system is presented, based on an incubation chamber and a microfluidic device which serves as an advanced microfluidic cultivation chamber. The flow is controlled by hydrostatic height differences and evaporation is avoided with the help of mineral oil. Six patient compartm...

  5. Automatic Loop Parallelization via Compiler Guided Refactoring

    DEFF Research Database (Denmark)

    Larsen, Per; Ladelsky, Razya; Lidman, Jacob;

    For many parallel applications, performance relies not on instruction-level parallelism, but on loop-level parallelism. Unfortunately, many modern applications are written in ways that obstruct automatic loop parallelization. Since we cannot identify sufficient parallelization opportunities for t...... the OpenMP code (within 75-111%). The second benchmark outperforms hand-parallelized and optimized OpenMP code (within 109-242%)....... be combined with target-specific optimizations. Furthermore, comparing the first benchmark to hand-parallelized, hand-optimized pthreads and OpenMP versions, we find that code generated using our approach typically outperforms the pthreads code (within 93-339%). It also performs competitively against...
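
    The payoff of loop parallelization is easiest to see on a loop whose iterations are provably independent. A Python multiprocessing analogue of such a parallelized loop (the paper itself targets compiled code with OpenMP/pthreads; this is only an illustration of the concept):

```python
# Analogue of loop-level parallelism: the iterations of the loop body are
# independent, so they can be mapped over a pool of worker processes.
from multiprocessing import Pool

def body(i):
    return i * i                       # stand-in for an expensive loop body

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(body, range(1000))   # parallel "for" loop
    print(sum(results))
```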

  6. STAMPS: software tool for automated MRI post-processing on a supercomputer

    OpenAIRE

    Bigler, Don C.; Aksu, Yaman; Yang, Qing X.

    2009-01-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features....

  7. Manual versus automated blood sampling

    DEFF Research Database (Denmark)

    Teilmann, A C; Kalliokoski, Otto; Sørensen, Dorte B;

    2014-01-01

    corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters......Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters......, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal...

  8. Unmet needs in automated cytogenetics

    International Nuclear Information System (INIS)

    Though some, at least, of the goals of automation systems for analysis of clinical cytogenetic material seem either at hand, like automatic metaphase finding, or at least likely to be met in the near future, like operator-assisted semi-automatic analysis of banded metaphase spreads, important areas of cytogenetic analysis, most importantly the determination of chromosomal aberration frequencies in populations of cells or in samples of cells from people exposed to environmental mutagens, await practical methods of automation. Important as the clinical diagnostic applications are, it is apparent that increasing concern over the clastogenic effects of the multitude of potentially clastogenic chemical and physical agents to which human populations are being increasingly exposed, and the resulting emergence of extensive cytogenetic testing protocols, makes the development of automation not only economically feasible but almost mandatory. The nature of the problems involved, and actual or possible approaches to their solution, are discussed

  9. Network based automation for SMEs

    DEFF Research Database (Denmark)

    Shahabeddini Parizi, Mohammad; Radziwon, Agnieszka

    2016-01-01

    The implementation of appropriate automation concepts which increase productivity in Small and Medium Sized Enterprises (SMEs) requires a lot of effort, due to their limited resources. Therefore, it is strongly recommended for small firms to open up to external sources of knowledge, which...... could be obtained through network interaction. Based on two extreme cases of SMEs representing low-tech industry and an in-depth analysis of their manufacturing facilities, this paper presents how collaboration between firms embedded in a regional ecosystem could result in implementation of new...... other members of the same regional ecosystem. The findings highlight two main automation-related areas where manufacturing SMEs could leverage external sources of knowledge – these are assistance in defining the automation problem as well as appropriate solution and provider selection. Consequently, this...

  10. Design automation for integrated circuits

    Science.gov (United States)

    Newell, S. B.; de Geus, A. J.; Rohrer, R. A.

    1983-04-01

    Consideration is given to the development status of the use of computers in automated integrated circuit design methods, which promise the minimization of both design time and design error incidence. Integrated circuit design encompasses two major tasks: error specification, in which the goal is a logic diagram that accurately represents the desired electronic function, and physical specification, in which the goal is an exact description of the physical locations of all circuit elements and their interconnections on the chip. Design automation not only saves money by reducing design and fabrication time, but also helps the community of systems and logic designers to work more innovatively. Attention is given to established design automation methodologies, programmable logic arrays, and design shortcuts.

  11. About the creation of a parallel bilingual corpora of web-publications

    OpenAIRE

    Lande, D. V.; Zhygalo, V. V.

    2008-01-01

    An algorithm for the creation of parallel text corpora is presented. The algorithm is based on the use of "key words" in text documents and on means of their automated translation. Key words were singled out by means of Russian and Ukrainian morphological dictionaries, as well as dictionaries of the translation of nouns for the Russian and Ukrainian languages. In addition, empirical-statistical rules were used to calculate the weights of the terms in the documents. The algorithm under consi...
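
    The matching step can be sketched as keyword overlap after pushing one language's key words through a bilingual dictionary (the mini-dictionary and documents below are hypothetical placeholders; the authors additionally weight terms with empirical-statistical rules):

```python
# Sketch: pair documents across languages by overlap of translated key words.
# The dictionary and candidate documents are illustrative placeholders.
DICT = {"mir": "peace", "tekst": "text", "zakon": "law"}   # source -> target

def overlap(src_keywords, tgt_keywords):
    translated = {DICT.get(w, w) for w in src_keywords}
    return len(translated & set(tgt_keywords)) / max(len(tgt_keywords), 1)

src_doc = ["mir", "zakon"]
candidates = {"doc1": ["peace", "law"], "doc2": ["sport", "music"]}
best = max(candidates, key=lambda d: overlap(src_doc, candidates[d]))
print(best)   # doc1 -- highest keyword overlap, so treated as the parallel text
```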

  12. Parallel workflow tools to facilitate human brain MRI post-processing

    OpenAIRE

    Gaolang Gong

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computat...

  13. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  14. Use Computer-Aided Tools to Parallelize Large CFD Applications

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Yan, J.

    2000-01-01

    Porting applications to high-performance parallel computers is always a challenging task. It is time consuming and costly. With rapid progress in hardware architectures and the increasing complexity of real applications in recent years, the problem has become even more severe. Today, scalability and high performance are mostly achieved with handwritten parallel programs using message-passing libraries (e.g. MPI). However, this process is very difficult and often error-prone. The recent reemergence of shared memory parallel (SMP) architectures, such as the cache coherent Non-Uniform Memory Access (ccNUMA) architecture used in the SGI Origin 2000, shows good prospects for scaling beyond hundreds of processors. Programming on an SMP is simplified by working in a globally accessible address space. The user can supply compiler directives, such as OpenMP, to parallelize the code. As an industry standard for portable implementation of parallel programs for SMPs, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran, C and C++ to express shared memory parallelism. It promises an incremental path for parallel conversion of existing software, as well as scalability and performance for a complete rewrite or an entirely new development. Perhaps the main disadvantage of programming with directives is that inserted directives may not necessarily enhance performance. In the worst cases, they can create erroneous results. While vendors have provided tools to perform error-checking and profiling, automation in directive insertion is very limited and often fails on large programs, primarily due to the lack of a thorough enough data dependence analysis. To overcome this deficiency, we have developed a toolkit, CAPO, to automatically insert OpenMP directives in Fortran programs and apply certain degrees of optimization. CAPO is aimed at taking advantage of detailed inter-procedural dependence analysis provided by CAPTools, developed by the University of

  15. A Survey of Parallel Data Mining

    OpenAIRE

    Freitas, Alex A

    1998-01-01

    With the fast, continuous increase in the number and size of databases, parallel data mining is a natural and cost-effective approach to tackling the problem of scalability in data mining. Recently there has been considerable research on parallel data mining. However, most projects focus on the parallelization of a single kind of data mining algorithm/paradigm. This paper surveys parallel data mining from a broader perspective. More precisely, we discuss the parallelization of ...
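
    A representative kernel that parallelizes along the data dimension is frequency counting, the core of association-rule mining: each worker counts over its own partition and the partial counts are merged. A hedged sketch:

```python
# Data-parallel frequency counting, the kernel behind many mining algorithms:
# partition the records, count locally in each worker, then merge the counts.
from collections import Counter
from multiprocessing import Pool

def count_items(records):
    c = Counter()
    for record in records:
        c.update(record)
    return c

if __name__ == "__main__":
    transactions = [["a", "b"], ["b", "c"], ["a", "b", "c"], ["b"]] * 1000
    chunks = [transactions[i::4] for i in range(4)]       # 4 partitions
    with Pool(4) as pool:
        partials = pool.map(count_items, chunks)
    total = sum(partials, Counter())                      # merge partial counts
    print(total.most_common(3))
```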

  16. Automated synthesis of sialylated oligosaccharides

    Directory of Open Access Journals (Sweden)

    Davide Esposito

    2012-09-01

    Sialic acid-containing glycans play a major role in cell-surface interactions with external partners such as cells and viruses. Straightforward access to sialosides is required in order to study their biological functions on a molecular level. Here, automated oligosaccharide synthesis was used to facilitate the preparation of this class of biomolecules. Our strategy relies on novel sialyl α-(2→3) and α-(2→6) galactosyl imidates, which, used in combination with the automated platform, provided rapid access to a small library of conjugation-ready sialosides of biological relevance.

  17. Agile Data: Automating database refactorings

    Directory of Open Access Journals (Sweden)

    Bruno Xavier

    2014-09-01

    This paper discusses an automated approach to database change management throughout the companies’ development workflow. By using automated tools, companies can avoid common issues related to manual database deployments. This work was motivated by analyzing usual problems within organizations, mostly originated from manual interventions that may result in systems disruptions and production incidents. In addition to practices of continuous integration and continuous delivery, the current paper describes a case study in which a suggested pipeline is implemented in order to reduce the deployment times and decrease incidents due to ineffective data controlling.
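
    The centrepiece of such a pipeline is a migration runner that applies versioned schema changes exactly once, so that every environment converges on the same schema. A minimal sketch with sqlite3 (the version table and migrations are illustrative, not taken from the paper's case study):

```python
# Minimal automated migration runner: versioned DDL applied exactly once.
import sqlite3

MIGRATIONS = [  # ordered, append-only list of schema changes (hypothetical)
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def migrate(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    current = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()[0] or 0
    for v, ddl in enumerate(MIGRATIONS[current:], start=current + 1):
        conn.execute(ddl)                                  # apply the pending change
        conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
    conn.commit()

migrate(sqlite3.connect(":memory:"))   # re-running applies nothing new
```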

  18. Automation system for experiment control

    International Nuclear Information System (INIS)

    An automated multi-level system designed for acquisition, accumulation, sorting and processing of information obtained in the course of an experiment is discussed. Intelligent terminals are established at each nuclear installation and are interconnected with the measuring equipment of the corresponding installation. The intelligent terminals, operating in interactive real-time mode, permit the use of the processing centre's computing facilities and storage through data links. On the top level of the automated system, a third-generation M4030 computer with 256 Kbyte internal memory is employed. The intelligent terminals are built on the basis of the Ryad-1010 and M-400 computers

  19. Status of automated tensile machine

    International Nuclear Information System (INIS)

    The objective of this work is to develop the Monbusho Automated Tensile machine (MATRON) and install and operate it at the Pacific Northwest Laboratory (PNL). The machine is designed to provide rapid, automated testing of irradiated miniature tensile specimens in a vacuum at elevated temperatures. The MATRON was successfully developed and shipped to PNL for installation in a hot facility. The original installation plan was modified to simplify the current and subsequent installations, and the installation was completed. Detailed procedures governing the operation of the system were written. Testing of irradiated miniature tensile specimens should begin in the near future

  20. Design automation, languages, and simulations

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    As the complexity of electronic systems continues to increase, the micro-electronic industry depends upon automation and simulations to adapt quickly to market changes and new technologies. Compiled from chapters contributed to CRC's best-selling VLSI Handbook, this volume covers a broad range of topics relevant to design automation, languages, and simulations. These include a collaborative framework that coordinates distributed design activities through the Internet, an overview of the Verilog hardware description language and its use in a design environment, hardware/software co-design, syst

  1. Automated Podcasting System for Universities

    Directory of Open Access Journals (Sweden)

    Ypatios Grigoriadis

    2013-03-01

    This paper presents the results achieved at Graz University of Technology (TU Graz) in the field of automating the process of recording and publishing university lectures in a very new way. It outlines cornerstones of the development and integration of an automated recording system, such as the lecture hall setup, the recording hardware and software architecture, as well as the development of a text-based search for the final product by means of indexing video podcasts. Furthermore, the paper takes a look at didactical aspects, evaluations done in this context and a future outlook.

  2. 2013 Chinese Intelligent Automation Conference

    CERN Document Server

    Deng, Zhidong

    2013-01-01

    Proceedings of the 2013 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’13, held in Yangzhou, China. The topics include e.g. adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, and reconfigurable control. Engineers and researchers from academia, industry, and government can gain an inside view of new solutions combining ideas from multiple disciplines in the field of intelligent automation.   Zengqi Sun and Zhidong Deng are professors at the Department of Computer Science, Tsinghua University, China.

  4. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process- and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process

  5. Automated theorem proving theory and practice

    CERN Document Server

    Newborn, Monty

    2001-01-01

    As the 21st century begins, the power of our magical new tool and partner, the computer, is increasing at an astonishing rate. Computers that perform billions of operations per second are now commonplace. Multiprocessors with thousands of little computers - relatively little! - can now carry out parallel computations and solve problems in seconds that only a few years ago took days or months. Chess-playing programs are on an even footing with the world's best players. IBM's Deep Blue defeated world champion Garry Kasparov in a match several years ago. Increasingly, computers are expected to be more intelligent, to reason, to be able to draw conclusions from given facts, or abstractly, to prove theorems - the subject of this book. Specifically, this book is about two theorem-proving programs, THEO and HERBY. The first four chapters contain introductory material about automated theorem proving and the two programs. This includes material on the language used to express theorems, predicate calculus, and the rules of...

  6. ORGANIZATIONAL CULTURE AND MANAGEMENT CULTURE

    OpenAIRE

    Tudor Hobeanu; Loredana Vacarescu Hobeanu

    2010-01-01

    The paper highlights the importance of organizational culture and management culture, as reflected in the remarkable results they support at the economic and social level of an organization. Their functions are presented, together with specific ways in which the levels of organizational culture are expressed and ways of adapting to the requirements of the organization's management culture.

  7. Using CLIPS in the domain of knowledge-based massively parallel programming

    Science.gov (United States)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.

  8. Utilization of blood cultures in Danish hospitals

    DEFF Research Database (Denmark)

    Gubbels, S; Nielsen, J; Voldstedlund, M;

    2015-01-01

    This national population-based study was conducted as part of the development of a national automated surveillance system for hospital-acquired bacteraemia and ascertains the utilization of blood cultures (BCs). A primary objective was to understand how local differences may affect interpretation...

  9. Automation of cDNA Synthesis and Labelling Improves Reproducibility

    Directory of Open Access Journals (Sweden)

    Daniel Klevebring

    2009-01-01

    Background. Several technologies, such as in-depth sequencing and microarrays, enable large-scale interrogation of genomes and transcriptomes. In this study, we assess reproducibility and throughput by moving all laboratory procedures to a robotic workstation capable of handling superparamagnetic beads. Here, we describe a fully automated procedure for cDNA synthesis and labelling for microarrays, where the purification steps prior to and after labelling are based on precipitation of DNA on carboxylic acid-coated paramagnetic beads. Results. The fully automated procedure allows samples arrayed on a microtiter plate to be processed in parallel without manual intervention, ensuring high reproducibility. We compare our results to a manual sample preparation procedure and, in addition, use a comprehensive reference dataset to show that the protocol described performs better than similar manual procedures. Conclusions. We demonstrate, in an automated gene expression microarray experiment, a reduced variance between replicates, resulting in an increase in the statistical power to detect differentially expressed genes, thus allowing smaller differences between samples to be identified. This protocol can, with minor modifications, be used to create cDNA libraries for other applications such as in-depth analysis using next-generation sequencing technologies.

  10. Improving automated load flexibility of nuclear power plants with ALFC

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, Andreas [AREVA GmbH, Karlstein (Germany). Plant Control/Training; Klaus, Peter [E.ON NPP Isar 2, Essenbach (Germany). Plant Operation/Production Engineering

    2016-07-01

    In several German and Swiss Nuclear Power Plants with Pressurized Water Reactor (PWR) the control of the reactor power was and will be improved, in order to be able to support the energy transition, with increasing volatile renewable energy in the grid, by flexible load operation according to the need of the load dispatcher (power system stability). Especially regarding the mentioned German NPPs with a nominal electric power of approx. 1,500 MW, the general objectives are the main automated grid-relevant operation modes. The new possibilities of digital I and C (such as TELEPERM® XS) enable the automation of the operating modes, provided that manual support is no longer necessary. These possibilities were and will be implemented by AREVA within the ALFC projects. Manifold adaption algorithms for the reactor-physical variations during the nuclear load cycle enable a precise control of the axial power density distribution and of the reactivity management in the reactor core. Finally, this is the basis for highly automated load flexibility, with parallel respect for and surveillance of the operational limits of a PWR.

  11. Improving automated load flexibility of nuclear power plants with ALFC

    International Nuclear Information System (INIS)

    In several German and Swiss Nuclear Power Plants with Pressurized Water Reactor (PWR) the control of the reactor power was and will be improved, in order to be able to support the energy transition, with increasing volatile renewable energy in the grid, by flexible load operation according to the need of the load dispatcher (power system stability). Especially regarding the mentioned German NPPs with a nominal electric power of approx. 1,500 MW, the general objectives are the main automated grid-relevant operation modes. The new possibilities of digital I and C (such as TELEPERM® XS) enable the automation of the operating modes, provided that manual support is no longer necessary. These possibilities were and will be implemented by AREVA within the ALFC projects. Manifold adaption algorithms for the reactor-physical variations during the nuclear load cycle enable a precise control of the axial power density distribution and of the reactivity management in the reactor core. Finally, this is the basis for highly automated load flexibility, with parallel respect for and surveillance of the operational limits of a PWR.

  12. Automated Ply Inspection (API) for AFP Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Automated Ply Inspection (API) system autonomously inspects layups created by high speed automated fiber placement (AFP) machines. API comprises a high accuracy...

  13. Laboratory automation and LIMS in forensics

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Anders Johannes; Morling, Niels

    2013-01-01

    Implementation of laboratory automation and LIMS in a forensic laboratory enables the laboratory, to standardize sample processing. Automated liquid handlers can increase throughput and eliminate manual repetitive pipetting operations, known to result in occupational injuries to the technical staff...

  14. Cultural commons and cultural evolution

    OpenAIRE

    Giangiacomo Bravo

    2010-01-01

    Culture evolves following a process that is akin to biological evolution, although with some significant differences. At the same time, culture often has a collective-good value for human groups. This paper studies culture in an evolutionary perspective, with a focus on the implications of group definition for the coexistence of different cultures. A model of cultural evolution is presented in which agents interact in an artificial environment. The belonging to a specific memetic group is a majo...

  15. Towards Automated System Synthesis Using SCIDUCTION

    OpenAIRE

    Jha, Susmit Kumar

    2011-01-01

    Automated synthesis of systems that are correct by construction has been a long-standing goal of computer science. Synthesis is a creative task and requires human intuition and skill. Its complete automation is currently beyond the capacity of programs that do automated reasoning. However, there is a pressing need for tools and techniques that can automate non-intuitive and error-prone synthesis tasks. This thesis proposes a novel synthesis approach to solve such tasks in the synthesis of pro...

  16. GUI test automation for Qt application

    OpenAIRE

    Wang, Lei

    2015-01-01

    GUI test automation is a popular and interesting subject in the testing industry. Many companies plan to start test automation projects in order to implement efficient, less expensive software testing. However, there are challenges for testing teams that lack experience with GUI test automation. Many GUI test automation projects have ended in failure due to mistakes made during the early stages of the project. The major work of this thesis is to find a solution to the challenges of e...

  17. Automated Integrated Analog Filter Design Issues

    OpenAIRE

    Karolis Kiela; Romualdas Navickas

    2015-01-01

    An analysis of modern automated integrated analog circuit design methods and their use in integrated filter design is presented. Modern automated analog circuit design tools are based on optimization algorithms and/or new circuit generation methods. Most automated integrated filter design methods are suited only to gm-C and switched-current filter topologies. Here, an algorithm for active RC integrated filter design is proposed that can be used in automated filter design. The algorithm is t...

  18. Automated activation-analysis system

    International Nuclear Information System (INIS)

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described

  19. Automated Analysis of Infinite Scenarios

    DEFF Research Database (Denmark)

    Buchholtz, Mikael

    The security of a network protocol crucially relies on the scenario in which the protocol is deployed. This paper describes syntactic constructs for modelling network scenarios and presents an automated analysis tool, which can guarantee that security properties hold in all of the (infinitely many...

  20. Automated visual inspection of textile

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    A method for automated inspection of two types of textile is presented. The goal of the inspection is to determine defects in the textile. A prototype is constructed for simulating the textile production line. At the prototype the images of the textile are acquired by a high speed line scan camera...

  1. Automation of Space Inventory Management

    Science.gov (United States)

    Fink, Patrick W.; Ngo, Phong; Wagner, Raymond; Barton, Richard; Gifford, Kevin

    2009-01-01

    This viewgraph presentation describes the utilization of automated space-based inventory management through handheld RFID readers and BioNet Middleware. The contents include: 1) Space-Based INventory Management; 2) Real-Time RFID Location and Tracking; 3) Surface Acoustic Wave (SAW) RFID; and 4) BioNet Middleware.

  2. Distribution system analysis and automation

    CERN Document Server

    Gers, Juan

    2013-01-01

    A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems which combined with automation, underpin the emerging concept of the "smart grid". This book is supported by theoretical concepts with real-world applications and MATLAB exercises.

  3. Automating the conflict resolution process

    Science.gov (United States)

    Wike, Jeffrey S.

    1991-01-01

    The purpose is to initiate a discussion of how the conflict resolution process at the Network Control Center can be made more efficient. Described here are how resource conflicts are currently resolved as well as the impacts of automating conflict resolution in the ATDRSS era. A variety of conflict resolution strategies are presented.

  4. Automated Clustering of Similar Amendments

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The Italian Senate is clogged by computer-generated amendments. This talk will describe a simple strategy to cluster them in an automated fashion, so that the appropriate Senate procedures can be used to get rid of them in one sweep.
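
    One simple automated strategy of this kind: compute pairwise textual similarity and merge near-duplicates with a union-find structure (a sketch; the threshold and data are illustrative, and this is not necessarily the speaker's exact method):

```python
# Cluster near-duplicate texts: pairwise similarity + union-find.
from difflib import SequenceMatcher

def clusters(texts, threshold=0.8):
    parent = list(range(len(texts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path compression
            i = parent[i]
        return i
    for i in range(len(texts)):
        for j in range(i + 1, len(texts)):
            if SequenceMatcher(None, texts[i], texts[j]).ratio() >= threshold:
                parent[find(i)] = find(j)       # merge the two clusters
    groups = {}
    for i in range(len(texts)):
        groups.setdefault(find(i), []).append(texts[i])
    return list(groups.values())

amendments = ["raise fee to 10 euro", "raise fee to 11 euro", "delete article 3"]
print(clusters(amendments))   # the two fee amendments end up in one cluster
```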

  5. Feasibility Analysis of Crane Automation

    Institute of Scientific and Technical Information of China (English)

    DONG Ming-xiao; MEI Xue-song; JIANG Ge-dong; ZHANG Gui-qing

    2006-01-01

    This paper summarizes the modeling methods, open-loop control and closed-loop control techniques of various forms of cranes, worldwide, and discusses their feasibilities and limitations in engineering. Then the dynamic behaviors of cranes are analyzed. Finally, we propose applied modeling methods and feasible control techniques and demonstrate the feasibilities of crane automation.

  6. Automation; The New Industrial Revolution.

    Science.gov (United States)

    Arnstein, George E.

    Automation is a word that describes the workings of computers and the innovations of automatic transfer machines in the factory. As the hallmark of the new industrial revolution, computers displace workers and create a need for new skills and retraining programs. With improved communication between industry and the educational community to…

  7. Automation of Feynman diagram evaluations

    International Nuclear Information System (INIS)

    A C-program DIANA (DIagram ANAlyser) for the automation of Feynman diagram evaluations is presented. It consists of two parts: the analyzer of diagrams and the interpreter of a special text manipulating language. This language can be used to create a source code for analytical or numerical evaluations and to keep the control of the process in general

  8. Automated methods of corrosion measurement

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Bech-Nielsen, Gregers; Reeve, John Ch;

    1997-01-01

    to revise assumptions regarding the basis of the method, which sometimes leads to the discovery of as-yet unnoticed phenomena. The present selection of automated methods for corrosion measurements is not motivated simply by the fact that a certain measurement can be performed automatically...

  9. Teacherbot: Interventions in Automated Teaching

    Science.gov (United States)

    Bayne, Sian

    2015-01-01

    Promises of "teacher-light" tuition and of enhanced "efficiency" via the automation of teaching have been with us since the early days of digital education, sometimes embraced by academics and institutions, and sometimes resisted as a set of moves which are damaging to teacher professionalism and to the humanistic values of…

  10. Automation, Labor Productivity and Employment

    DEFF Research Database (Denmark)

    Kromann, Lene; Rose Skaksen, Jan; Sørensen, Anders

    CEBR now presents the first report of the AIM project. The report shows that there is good potential for further automation in a large share of Danish manufacturing companies, since today, on average, only about 30% of the companies' production processes are automated. In particular, the process ar...

  11. CCD characterization and measurements automation

    Czech Academy of Sciences Publication Activity Database

    Kotov, I.V.; Frank, J.; Kotov, A.I.; Kubánek, Petr; O´Connor, P.; Prouza, Michael; Radeka, V.; Takacs, P.

    2012-01-01

    Roč. 695, Dec (2012), 188-192. ISSN 0168-9002 R&D Projects: GA MŠk ME09052 Institutional research plan: CEZ:AV0Z10100502 Keywords : CCD * characterization * test automation Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.142, year: 2012

  12. Adaptation : A Partially Automated Approach

    NARCIS (Netherlands)

    Manjing, Tham; Bukhsh, F.A.; Weigand, H.

    2014-01-01

    This paper showcases the possibility of creating an adaptive auditing system. Adaptation in an audit environment needs human intervention at some point. Based on a case study, this paper focuses on automation of the adaptation process. It is divided into solution design and validation parts. The artifact

  13. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automation is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 3x10^9 bits per 14x17 film - equivalent to 2200 computer floppy disks. Parts handling systems and robotics, applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to automate fully the exposure step. Finally, computer-aided interpretation appears on the horizon. A unit which laser-scans a 14x17 film in 6-8 seconds can digitize film information for further manipulation and possible automatic interrogation (computer-aided interpretation). The system, called FDRS (for film digital radiography system), is moving toward 50 micron (16 lines/mm) resolution. This is believed to meet the majority of image-content needs. (Author). 4 refs.; 21 figs

  14. Two Level Parallel Grammatical Evolution

    Science.gov (United States)

    Ošmera, Pavel

    This paper describes a Two Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable-length linear genome to govern the mapping of a Backus-Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE), the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and a comparison with the standard coding of GEs are presented. The new method is based on parallel grammatical evolution (PGE) with a backward-processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods are discussed and the architecture of their combination is described. An application is also discussed and results on a real-world application are described.
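
    The heart of any GE variant is the genotype-to-phenotype mapping, in which each codon of the linear genome selects a grammar production modulo the number of available choices. A compact sketch of standard forward GE mapping (the backward-coding and differential-evolution layers of TLPGE are not shown; the grammar is a toy example):

```python
# Standard grammatical-evolution mapping: codons choose grammar productions.
GRAMMAR = {                      # toy BNF grammar for arithmetic expressions
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>": [["+"], ["*"]],
}

def ge_map(genome, symbol="<expr>", max_wraps=2):
    out, stack, i, wraps = [], [symbol], 0, 0
    while stack:
        sym = stack.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)               # terminal symbol
            continue
        if i == len(genome):              # genome exhausted: wrap around
            if wraps == max_wraps:
                return None               # mapping failed
            i, wraps = 0, wraps + 1
        rules = GRAMMAR[sym]
        choice = genome[i] % len(rules)   # codon mod number of productions
        i += 1
        stack = rules[choice] + stack     # expand leftmost non-terminal
    return "".join(out)

print(ge_map([0, 1, 2, 0, 1]))            # -> "x+x+x" for this genome
```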

  15. Parallel supercomputing with commodity components

    Energy Technology Data Exchange (ETDEWEB)

    Warren, M.S.; Goda, M.P. [Los Alamos National Lab., NM (United States); Becker, D.J. [Goddard Space Flight Center, Greenbelt, MD (United States)] [and others

    1997-09-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  16. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  17. Automating Shallow Seismic Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Steeples, Don W.

    2004-12-09

    This seven-year, shallow-seismic reflection research project had the aim of improving geophysical imaging of possible contaminant flow paths. Thousands of chemically contaminated sites exist in the United States, including at least 3,700 at Department of Energy (DOE) facilities. Imaging technologies such as shallow seismic reflection (SSR) and ground-penetrating radar (GPR) sometimes are capable of identifying geologic conditions that might indicate preferential contaminant-flow paths. Historically, SSR has been used very little at depths shallower than 30 m, and even more rarely at depths of 10 m or less. Conversely, GPR is rarely useful at depths greater than 10 m, especially in areas where clay or other electrically conductive materials are present near the surface. Efforts to image the cone of depression around a pumping well using seismic methods were only partially successful (for complete references of all research results, see the full Final Technical Report, DOE/ER/14826-F), but peripheral results included development of SSR methods for depths shallower than one meter, a depth range that had not been achieved before. Imaging at such shallow depths, however, requires geophone intervals of the order of 10 cm or less, which makes such surveys very expensive in terms of human time and effort. We also showed that SSR and GPR could be used in a complementary fashion to image the same volume of earth at very shallow depths. The primary research focus of the second three-year period of funding was to develop and demonstrate an automated method of conducting two-dimensional (2D) shallow-seismic surveys with the goal of saving time, effort, and money. Tests involving the second generation of the hydraulic geophone-planting device dubbed the "Autojuggie" showed that large numbers of geophones can be placed quickly and automatically and can acquire high-quality data, although not under rough topographic conditions. In some easy

  18. Lightweight Specifications for Parallel Correctness

    OpenAIRE

    Burnim, Jacob Samuels

    2012-01-01

    With the spread of multicore processors, it is increasingly necessary for programmers to write parallel software. Yet writing correct parallel software with explicit multithreading remains a difficult undertaking. Though many tools exist to help test, debug, and verify parallel programs, such tools are often hindered by a lack of any specification from the programmer of the intended, correct parallel behavior of his or her software. In this dissertation, we propose three novel lightweight specificati...

  19. Efficient Parallel Programming with Linda

    OpenAIRE

    Ashish Deshpande; Martin Schultz

    1992-01-01

    Linda is a coordination language invented by David Gelernter at Yale University which, when combined with a computation language (such as C), yields a high-level parallel programming language for MIMD machines. Linda is based on a virtual shared associative memory containing objects called tuples. Skeptics have long claimed that Linda programs could not be efficient on distributed memory architectures. In this paper, we address this claim by discussing C-Linda's performance in solving a particula...
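
    The tuple-space model itself is compact enough to sketch. Below is a toy, hypothetical in-memory tuple space in Python (our own illustration; real Linda is a coordination layer over a host language such as C), showing the classic out/in/rd operations with blocking matches and None as a wildcard.

        import threading

        class TupleSpace:
            # Toy shared associative memory in the spirit of Linda:
            # out() deposits a tuple; in_() removes a matching tuple,
            # blocking until one exists; rd() reads without removing.
            def __init__(self):
                self._tuples = []
                self._cond = threading.Condition()

            def out(self, tup):
                with self._cond:
                    self._tuples.append(tup)
                    self._cond.notify_all()

            def _match(self, template):
                for tup in self._tuples:
                    if len(tup) == len(template) and all(
                        t is None or t == v
                        for t, v in zip(template, tup)
                    ):
                        return tup
                return None

            def in_(self, template):
                with self._cond:
                    while (tup := self._match(template)) is None:
                        self._cond.wait()
                    self._tuples.remove(tup)
                    return tup

            def rd(self, template):
                with self._cond:
                    while (tup := self._match(template)) is None:
                        self._cond.wait()
                    return tup

        space = TupleSpace()
        space.out(("task", 1, "payload"))
        print(space.in_(("task", None, None)))  # ('task', 1, 'payload')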

  20. Shot noise in parallel wires

    OpenAIRE

    Lagerqvist, Johan; Chen, Yu-Chang; Di Ventra, Massimiliano

    2004-01-01

    We report first-principles calculations of the shot noise properties of parallel carbon wires in the regime in which the interwire distance is much smaller than the inelastic mean free path. We find that, with increasing interwire distance, the current rapidly approaches a value close to twice the current of a single wire, while the Fano factor, at the same distances, is still larger than the Fano factor of a single wire. This enhanced Fano factor is the signature of the correlation between electron...
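
    For reference (a standard definition, not taken from this record), the Fano factor compares the zero-frequency shot-noise power to its Poissonian value:

        F = S(0) / (2 e I)

    Here S(0) is the zero-frequency current noise power spectral density, e is the electron charge, and I is the average current; F = 1 for uncorrelated (Poissonian) transport, so an interwire enhancement of F such as the one reported above is a direct signature of correlations.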

  1. A tandem parallel plate analyzer

    International Nuclear Information System (INIS)

    By a new modification of a parallel plate analyzer, second-order focusing is obtained at an arbitrary injection angle. An analyzer of this kind with a small injection angle will have the advantage of a small operating voltage, compared to the Proca and Green analyzer, where the injection angle is 30 degrees. Thus, the newly proposed analyzer will be very useful for precise energy measurements of high-energy particles in the MeV range. (author)

  2. Mirror versus parallel bimanual reaching

    OpenAIRE

    Abdollahi, Farnaz; Kenyon, Robert V.; Patton, James L.

    2013-01-01

    Background In spite of their importance to everyday function, tasks that require both hands to work together such as lifting and carrying large objects have not been well studied and the full potential of how new technology might facilitate recovery remains unknown. Methods To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching move...

  3. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

    Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  4. Parallel implementation of multilevel BDDC

    Czech Academy of Sciences Publication Activity Database

    Šístek, Jakub; Mandel, J.; Sousedík, B.; Burda, P.

    Berlin : Springer, 2013 - (Cangiani, A.), s. 681-689 ISBN 978-3-642-33133-6. [ENUMATH 2011 - European Conference on Numerical Mathematics and Advanced Applications /9./. Leicester (GB), 05.09.2011-09.09.2011] R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:67985840 Keywords : parallel algorithms * domain decomposition * iterative substructuring Subject RIV: BA - General Mathematics http://link.springer.com/chapter/10.1007/978-3-642-33134-3_72

  5. Efficient, massively parallel eigenvalue computation

    Science.gov (United States)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the Maspar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
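
    The core computation is easy to reproduce in miniature. Below is a hypothetical serial sketch in Python/NumPy (our own illustration; the paper itself targeted the massively parallel MasPar systems) that diagonalizes a random real symmetric "Hamiltonian":

        import numpy as np

        def random_symmetric_spectrum(n=500, seed=0):
            # Symmetrizing a random matrix gives a dense real symmetric
            # matrix; eigh exploits the symmetry when diagonalizing.
            rng = np.random.default_rng(seed)
            a = rng.standard_normal((n, n))
            h = (a + a.T) / 2.0
            return np.linalg.eigh(h)

        eigenvalues, eigenvectors = random_symmetric_spectrum()
        print(eigenvalues[:5])  # the lowest few eigenvalues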

  6. Parallel Computing in Multicore Architecture

    OpenAIRE

    Taslima Yeasmin

    2014-01-01

    Many applications, ranging from signal processing to astronomy, rely on the FFT (Fast Fourier Transform) to increase the speed of computation. The computational demands of software continue to outpace the capabilities of processor and memory technologies, especially in scientific and engineering programs. Exploiting parallel FFT computation to increase the speed of computation is the objective of this research. This research deals with various approaches for FFT...
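
    The speed claim rests on the FFT reducing the O(n^2) discrete Fourier transform to O(n log n). A small, hypothetical Python/NumPy check (our own illustration, not from the paper) that a naive DFT agrees with the library FFT:

        import numpy as np

        def naive_dft(x):
            # O(n^2) transform via the DFT matrix exp(-2*pi*i*j*k/n),
            # included only as a correctness reference.
            n = len(x)
            k = np.arange(n)
            w = np.exp(-2j * np.pi * np.outer(k, k) / n)
            return w @ x

        x = np.random.rand(256)
        assert np.allclose(naive_dft(x), np.fft.fft(x))
        print("naive O(n^2) DFT matches np.fft.fft")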

  7. Distributed and Parallel Component Library

    Institute of Scientific and Technical Information of China (English)

    XU Zheng-quan; XU Yang; YAN Ai-ping

    2005-01-01

    A software component library is an essential part of reuse-based software development. It is shown that using a single component library to store, and search among, all kinds of components is very inefficient. We construct multiple libraries to support software reuse and use PVM as the development environment to emulate a large-scale computer, which is expected to provide distributed storage and parallel search of components efficiently and improve software reuse.
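
    The multi-library idea can be sketched in miniature: hold components in several libraries and query them in parallel, merging every match. A hypothetical Python illustration (our own; the record used PVM and C-era tooling), with a thread pool standing in for the distributed nodes:

        from concurrent.futures import ThreadPoolExecutor

        def search_library(library, keyword):
            # Search one component library; each library would live on
            # its own node in a PVM-style distributed deployment.
            return [name for name in library if keyword in name]

        def parallel_search(libraries, keyword):
            # Fan the query out across the libraries and merge the hits.
            with ThreadPoolExecutor(max_workers=len(libraries)) as pool:
                results = pool.map(search_library, libraries,
                                   [keyword] * len(libraries))
            return [hit for hits in results for hit in hits]

        gui_lib = ["button_widget", "menu_widget", "sort_util"]
        math_lib = ["fft_kernel", "sort_kernel", "matrix_util"]
        print(parallel_search([gui_lib, math_lib], "sort"))
        # ['sort_util', 'sort_kernel']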

  8. You're a What? Automation Technician

    Science.gov (United States)

    Mullins, John

    2010-01-01

    Many people think of automation as laborsaving technology, but it sure keeps Jim Duffell busy. Defined simply, automation is a technique for making a device run or a process occur with minimal direct human intervention. But the functions and technologies involved in automated manufacturing are complex. Nearly all functions, from orders coming in…

  9. Library Automation : Change for Productivity in Service

    OpenAIRE

    M.A. Gopinath

    1995-01-01

    Library operations, in the context of automation, necessitate an integrated approach and a change in the perception of the library's work. The article examines the functional aspects, social aspects, and system dynamics of library automation. Some strategies for library automation are suggested.

  10. Easy and Effective Parallel Programmable ETL

    DEFF Research Database (Denmark)

    Thomsen, Christian; Pedersen, Torben Bach

    2011-01-01

    ...typically the case that the ETL program can exploit both task parallelism and data parallelism to run faster. This, on the other hand, makes the development time longer, as it is complex to create a parallel ETL program. To remedy this situation, we propose efficient ways to parallelize typical ETL tasks... and we implement these new constructs in an ETL framework. The constructs are easy to apply and require only a few modifications to an ETL program to parallelize it. They support both task and data parallelism and give the programmer different possibilities to choose from. An experimental evaluation...
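
    Of the two, data parallelism is the simpler to sketch: partition the rows and apply the same transformation to each partition. A minimal, hypothetical Python illustration (our own; the record's framework and constructs are not reproduced here) using a process pool:

        from multiprocessing import Pool

        def transform(row):
            # A toy per-row transformation -- the T in ETL.
            return {**row, "total": row["qty"] * row["price"]}

        def parallel_etl(rows, workers=4):
            # Data parallelism: the same transform runs over row
            # partitions across a pool of worker processes.
            with Pool(workers) as pool:
                return pool.map(transform, rows)

        if __name__ == "__main__":
            extracted = [{"qty": q, "price": 2.5} for q in range(8)]
            print(parallel_etl(extracted)[:2])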

  11. Automatic Performance Debugging of SPMD-style Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Kunlin; Shi, Weisong; Yuan, Lin; Meng, Dan; Wang, Lei

    2011-01-01

    The single program, multiple data (SPMD) programming model is widely used for both high performance computing and Cloud computing. In this paper, we design and implement an innovative system, AutoAnalyzer, that automates the process of debugging performance problems of SPMD-style parallel programs, including data collection, performance behavior analysis, locating bottlenecks, and uncovering their root causes. AutoAnalyzer is unique in terms of two features: first, without any a priori knowledge, it automatically locates bottlenecks and uncovers their root causes for performance optimization; second, it is lightweight in terms of the size of the performance data to be collected and analyzed. Our contributions are three-fold: first, we propose two effective clustering algorithms to investigate the existence of performance bottlenecks that cause process behavior dissimilarity or code region behavior disparity, respectively; meanwhile, we present two searching algorithms to locate bottlenecks; second, on a basis o...
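
    The behavior-dissimilarity idea generalizes beyond AutoAnalyzer: collect one timing profile per process and flag outliers. A hypothetical NumPy sketch (our own; the paper's two clustering and two searching algorithms are not reproduced here):

        import numpy as np

        def dissimilar_processes(profiles, threshold=2.0):
            # profiles: shape (n_processes, n_code_regions), each entry
            # the time a process spent in a code region. Processes far
            # from the mean profile (relative to the median distance)
            # are flagged as behavior-dissimilarity suspects.
            profiles = np.asarray(profiles, dtype=float)
            dists = np.linalg.norm(profiles - profiles.mean(axis=0),
                                   axis=1)
            typical = np.median(dists) or 1.0
            return np.where(dists > threshold * typical)[0]

        # Process 3 spends far longer in region 1 than its peers.
        times = [[1.0, 2.0, 1.1], [1.1, 2.1, 1.0],
                 [0.9, 1.9, 1.2], [1.0, 9.5, 1.1]]
        print(dissimilar_processes(times))  # [3]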

  12. Throat Culture

    Science.gov (United States)

    A culture is a test that is often used to ...

  13. Repellent Culture.

    Science.gov (United States)

    Carroll, Jeffrey

    2001-01-01

    Considers defining "culture," noting how it is difficult to define because those individuals defining it cannot separate themselves from it. Relates these issues to student writing and their writing improvement. Addresses violence in relation to culture. (SG)

  14. Culturing Protozoa.

    Science.gov (United States)

    Stevenson, Paul

    1980-01-01

    Compares various nutrient media, growth conditions, and stock solutions used in culturing protozoa. A hay infusion in Chalkey's solution maintained at a stable temperature is recommended for producing the most dense and diverse cultures. (WB)

  15. Automated target recognition and tracking using an optical pattern recognition neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1991-01-01

    The ongoing development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that is an integration of an innovative optical parallel processor and a feature-extraction-based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature-extraction-based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator whose holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10^14 analog connections per second, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.
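
    Conceptually, the multichannel correlator is a bank of matched filters: a correlation peak marks a target's position regardless of where it sits in the scene. A small, hypothetical Python/NumPy sketch of shift-invariant detection via FFT-based cross-correlation (our own illustration; the OPRNN performs the equivalent operation optically, with holographic filters):

        import numpy as np

        def locate(scene, template):
            # Cross-correlation in the Fourier domain:
            # F(scene) * conj(F(template)). The correlation peak marks
            # the template's position; shifting the target in the
            # scene only shifts the peak (shift invariance).
            f_scene = np.fft.fft2(scene)
            f_temp = np.fft.fft2(template, s=scene.shape)
            corr = np.fft.ifft2(f_scene * np.conj(f_temp)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            return tuple(int(i) for i in peak)

        scene = np.zeros((64, 64))
        template = np.ones((5, 5))
        scene[20:25, 37:42] = 1.0       # embed the target at (20, 37)
        print(locate(scene, template))  # (20, 37)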

  16. The Experience of Large Computational Programs Parallelization. Parallel Version of MINUIT Program

    CERN Document Server

    Sapozhnikov, A P

    2003-01-01

    The problems surrounding the parallelization of large computational programs are discussed. As an example, a parallel version of MINUIT, a widely used minimization program, is introduced. Results from testing the MPI-based MINUIT on a multiprocessor system demonstrate the degree of parallelism actually achieved.
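
    The natural unit of parallelism in a minimizer like MINUIT is the many independent objective-function evaluations (for example, when forming numerical gradients). A minimal, hypothetical mpi4py sketch (our own, not the paper's code; run with, e.g., mpiexec -n 4 python script.py) that farms evaluations out over MPI ranks:

        import numpy as np
        from mpi4py import MPI

        def objective(params):
            # Stand-in for an expensive fit objective, e.g. a chi-square.
            return float(np.sum((params - 1.0) ** 2))

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # The root prepares one trial parameter vector per rank ...
        points = None
        if rank == 0:
            points = [np.full(3, 0.5 * i) for i in range(size)]

        # ... scatters them, and gathers the evaluated objectives.
        local_point = comm.scatter(points, root=0)
        local_value = objective(local_point)
        values = comm.gather(local_value, root=0)

        if rank == 0:
            print("objective values:", values)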

  17. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Full Text Available Introduction: The most common cause of diagnostic error is related to errors in laboratory tests, as well as errors in the interpretation of results. In order to reduce them, the laboratory currently has modern equipment that provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides that are concordant and discordant with results obtained using fully automated procedures. Materials and method: From January to July 2013, 1,000 hematology slides were analyzed. Automated analysis was performed on latest-generation equipment, whose methodology is based on electrical impedance and which is able to quantify all the formed elements of the blood across 22 parameters. The microscopy was performed simultaneously by two experts. Results: The data showed that only 42.70% were concordant, compared with 57.30% discordant. The main discordant findings were: red blood cell changes, 43.70% (n = 250); white blood cell changes, 38.46% (n = 220); and platelet count changes, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of the individual and cannot be explained because they were not investigated, which may compromise the final diagnosis. Conclusion: Qualitative microscopic analysis must be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  18. Corporate culture

    OpenAIRE

    Stoklasa, Pavel

    2010-01-01

    The theme of this bachelor's thesis is corporate culture, which is currently becoming a very important part of every company. The theoretical part provides the views of individual authors on this issue and explains important concepts related to the topic, in particular the elements of corporate culture, change in corporate culture, and the determinants that affect it. Furthermore, the theoretical part describes the best-known typologies of corporate culture by international authors. The ...

  19. Cultural Neuroscience

    OpenAIRE

    Ames, Daniel L.; Fiske, Susan T.

    2010-01-01

    Cultural neuroscience issues from the apparently incompatible combination of neuroscience and cultural psychology. A brief literature sampling suggests, instead, several preliminary topics that demonstrate proof of possibilities: cultural differences in both lower-level processes (e.g. perception, number representation) and higher-order processes (e.g. inferring others’ emotions, contemplating the self) are beginning to shed new light on both culture and cognition. Candidates for future cultu...

  20. Corporate culture

    OpenAIRE

    Brodská, Monika

    2008-01-01

    This bachelor thesis deals with the concept of corporate culture. The theoretical part is focused on explaining the concept of corporate culture, its material and immaterial components, and the characteristics and typologies given by selected authors. The practical part of my work is devoted to researching the state of corporate culture in a selected Belarusian corporation and to creating recommendations for improving its corporate culture, which is the contribution of my thesis. During the ...