WorldWideScience

Sample records for automated parallel cultures

  1. Parallel symbolic execution for automated real-world software testing

    OpenAIRE

    Bucur, Stefan; Ureche, Vlad; Zamfir, Cristian; Candea, George

    2011-01-01

    This paper introduces Cloud9, a platform for automated testing of real-world software. Our main contribution is the scalable parallelization of symbolic execution on clusters of commodity hardware, to help cope with path explosion. Cloud9 provides a systematic interface for writing "symbolic tests" that concisely specify entire families of inputs and behaviors to be tested, thus improving testing productivity. Cloud9 can handle not only single-threaded programs but also multi-threaded and dis...

  2. Toward an automated parallel computing environment for geosciences

    Science.gov (United States)

    Zhang, Huai; Liu, Mian; Shi, Yaolin; Yuen, David A.; Yan, Zhenzhen; Liang, Guoping

    2007-08-01

    Software for geodynamic modeling has not kept up with the fast growing computing hardware and network resources. In the past decade supercomputing power has become available to most researchers in the form of affordable Beowulf clusters and other parallel computer platforms. However, to take full advantage of such computing power requires developing parallel algorithms and associated software, a task that is often too daunting for geoscience modelers whose main expertise is in geosciences. We introduce here an automated parallel computing environment built on open-source algorithms and libraries. Users interact with this computing environment by specifying the partial differential equations, solvers, and model-specific properties using an English-like modeling language in the input files. The system then automatically generates the finite element codes that can be run on distributed or shared memory parallel machines. This system is dynamic and flexible, allowing users to address different problems in geosciences. It is capable of providing web-based services, enabling users to generate source codes online. This unique feature will facilitate high-performance computing to be integrated with distributed data grids in the emerging cyber-infrastructures for geosciences. In this paper we discuss the principles of this automated modeling environment and provide examples to demonstrate its versatility.
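
    To make the idea concrete, an input file in such a system pairs an equation description with solver and material choices, and the environment generates code from it. Below is a minimal sketch of the front end only; the keyword syntax, field names, and sample values are invented for illustration, not the system's actual modeling language:

      def parse_model_spec(text):
          """Turn a simple 'keyword: value' model description into a config dict."""
          config = {}
          for line in text.splitlines():
              line = line.split("#", 1)[0].strip()   # drop comments and blanks
              if not line:
                  continue
              key, _, value = line.partition(":")
              config[key.strip().lower()] = value.strip()
          return config

      spec = """
      equation: div(k * grad(u)) = f     # steady-state heat conduction
      solver:   conjugate_gradient
      mesh:     crust_model.msh
      k:        3.0                      # thermal conductivity, W/(m K)
      """

      config = parse_model_spec(spec)
      print(config["equation"], "->", config["solver"])

    A real system of this kind would hand such a configuration to a code generator that emits the parallel finite element program; the sketch stops at the parsing step.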

  3. Automation in laser scanning for cultural heritage applications

    OpenAIRE

    Böhm, Jan; Haala, Norbert; Alshawabkeh, Yahya

    2005-01-01

    Within the paper we present the current activities of the Institute for Photogrammetry in cultural heritage documentation in Jordan. In particular two sites, Petra and Jerash, were recorded using terrestrial laser scanning (TLS). We present the results and the current status of the recording. Experiences drawn from these projects have led us to investigate more automated approaches to TLS data processing. We detail two approaches within this work. The automation of georeferencing for TLS data...

  4. Automated Long-Term Monitoring of Parallel Microfluidic Operations Applying a Machine Vision-Assisted Positioning Method

    OpenAIRE

    Hon Ming Yip; John C. S. Li; Kai Xie; Xin Cui; Agrim Prasad; Qiannan Gao; Chi Chiu Leung; Lam, Raymond H. W.

    2014-01-01

    As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet...
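
    The monitoring loop itself is simple to picture: visit each chamber's coordinates, let the stage settle, and grab a frame. A minimal sketch with stub driver classes; the 4 x 8 layout, pitch, and the Stage/Camera APIs are invented stand-ins, and the paper's machine-vision positioning refinement is only indicated by a comment:

      import itertools, time

      class Stage:          # stand-in for a motorized-stage driver (hypothetical API)
          def move_to(self, x_mm, y_mm):
              print(f"stage -> ({x_mm:.2f}, {y_mm:.2f}) mm")

      class Camera:         # stand-in for a microscope camera driver (hypothetical API)
          def snap(self):
              return b"raw image bytes"

      # 32 chambers laid out as a 4 x 8 grid with 2.25 mm pitch (illustrative values)
      ORIGIN, PITCH = (1.0, 1.0), 2.25
      stage, camera = Stage(), Camera()

      for row, col in itertools.product(range(4), range(8)):
          stage.move_to(ORIGIN[0] + col * PITCH, ORIGIN[1] + row * PITCH)
          time.sleep(0.05)                  # let vibrations settle before imaging
          image = camera.snap()             # one frame per chamber, per pass
          # ...a machine-vision step would refine the position from fiducials here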

  5. Automated cantilever exchange and optical alignment for high-throughput, parallel atomic force microscopy

    CERN Document Server

    Bijnagte, Tom; Kramer, Lukas; Dekker, Bert; Herfst, Rodolf; Sadeghian, Hamed

    2016-01-01

    In atomic force microscopy (AFM), the exchange and alignment of the AFM cantilever with respect to the optical beam and position-sensitive detector (PSD) are often performed manually. This process is tedious and time-consuming and sometimes damages the cantilever or tip. To increase the throughput of AFM in industrial applications, the ability to automatically exchange and align the cantilever in a very short time with sufficient accuracy is required. In this paper, we present the development of an automated cantilever exchange and optical alignment instrument. We present an experimental proof of principle by exchanging various types of AFM cantilevers in 6 seconds with an accuracy better than 2 μm. The exchange and alignment unit is miniaturized to allow for integration in a parallel AFM. The reliability of the demonstrator has also been evaluated. Ten thousand continuous exchange and alignment cycles were performed without failure. The automated exchange and alignment of the AFM cantilever overcome a large ...

  6. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    Science.gov (United States)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single process) C or C++, and an order of magnitude less...
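
    The Normalize-Transpose law is what makes the implicit parallelism possible: a function written for scalars is automatically mapped over list arguments. As a rough illustration of the semantics only (in Python rather than SequenceL, and not the SequenceL implementation), a minimal sketch:

      def nt(f):
          """Emulate SequenceL's Normalize-Transpose semantics: scalar arguments
          are broadcast, list arguments are mapped over element by element."""
          def wrapped(*args):
              if any(isinstance(a, list) for a in args):
                  n = max(len(a) for a in args if isinstance(a, list))
                  norm = [a if isinstance(a, list) else [a] * n for a in args]  # normalize
                  return [wrapped(*row) for row in zip(*norm)]                  # transpose
              return f(*args)
          return wrapped

      @nt
      def axpy(a, x, y):          # written as a purely scalar computation
          return a * x + y

      print(axpy(2, [1, 2, 3], [10, 20, 30]))   # -> [12, 24, 36]

    Each element of the mapped result is independent of the others, which is what lets the real compiler distribute the work across cores without programmer annotations.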

  7. DATA TRANSFER IN THE AUTOMATED SYSTEM OF PARALLEL DESIGN AND CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Volkov Andrey Anatol'evich

    2012-12-01

    This article covers data transfer processes in the automated system of parallel design and construction. The authors consider the structure of reports used by contractors and clients when large-scale projects are implemented. All necessary items of information are grouped into three levels, and each level is described by certain attributes. The authors devote considerable attention to the integrated operational schedule, as it is the main tool of project management. Some recommendations concerning the forms and the content of reports are presented. Integrated automation of all operations is a necessary condition for the successful implementation of the new concept. The technical aspect of the notion of parallel design and construction also includes the client-to-server infrastructure that brings together all processes implemented by the parties involved in projects. This approach should be taken into consideration in the course of review of existing codes and standards to eliminate any inconsistency between the construction legislation and the practical experience of engineers involved in the process.

  8. Digital microfluidics for automated hanging drop cell spheroid culture.

    Science.gov (United States)

    Aijian, Andrew P; Garrell, Robin L

    2015-06-01

    Cell spheroids are multicellular aggregates, grown in vitro, that mimic the three-dimensional morphology of physiological tissues. Although there are numerous benefits to using spheroids in cell-based assays, the adoption of spheroids in routine biomedical research has been limited, in part, by the tedious workflow associated with spheroid formation and analysis. Here we describe a digital microfluidic platform that has been developed to automate liquid-handling protocols for the formation, maintenance, and analysis of multicellular spheroids in hanging drop culture. We show that droplets of liquid can be added to and extracted from through-holes, or "wells," fabricated in the bottom plate of a digital microfluidic device, enabling the formation and assaying of hanging drops. Using this digital microfluidic platform, spheroids of mouse mesenchymal stem cells were formed and maintained in situ for 72 h, exhibiting good viability (>90%) and size uniformity (low percent coefficient of variation). The digital microfluidic platform provides a viable tool for automating cell spheroid culture and analysis. PMID:25510471

  9. Automated integration of genomic physical mapping data via parallel simulated annealing

    Energy Technology Data Exchange (ETDEWEB)

    Slezak, T.

    1994-06-01

    The Human Genome Center at the Lawrence Livermore National Laboratory (LLNL) is nearing closure on a high-resolution physical map of human chromosome 19. We have built automated tools to assemble 15,000 fingerprinted cosmid clones into 800 contigs with minimal spanning paths identified. These islands are being ordered, oriented, and spanned by a variety of other techniques including: fluorescence in situ hybridization (FISH) at 3 levels of resolution, ECO restriction fragment mapping across all contigs, and a multitude of different hybridization and PCR techniques to link cosmid, YAC, BAC, PAC, and P1 clones. The FISH data provide us with partial order and distance data as well as orientation. We made the observation that map builders need a much rougher presentation of data than do map readers; the former wish to see raw data, since these can expose errors or interesting biology. We further noted that by ignoring our length and distance data we could simplify our problem into one that could be readily attacked with optimization techniques. The data integration problem could then be seen as an M x N ordering of our N cosmid clones which "intersect" M larger objects, by defining "intersection" to mean either contig/map membership or hybridization results. Clearly, the goal of making an integrated map is now to rearrange the N cosmid clone "columns" such that the number of gaps on the object "rows" is minimized. Our FISH partially-ordered cosmid clones provide us with a set of constraints that cannot be violated by the rearrangement process. We solved the optimization problem via simulated annealing performed on a network of 40+ Unix machines in parallel, using a server/client model built on explicit socket calls. For current maps we can create a map in about 4 hours on the parallel net versus 4+ days on a single workstation. Our biologists are now using this software on a daily basis to guide their efforts toward final closure.
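
    The optimization step lends itself to a compact illustration. The sketch below is a toy serial reimplementation, not the LLNL code (which ran as a socket-based client/server across 40+ Unix machines): it anneals a column ordering of a binary membership matrix to minimize gaps per row while honoring precedence constraints of the kind the FISH data supply. All data and parameter values are invented:

      import math, random

      def gaps(matrix, order):
          """Cost: for each object row, count the extra runs of 1s beyond the
          first; a gap-free map has every row's clones contiguous."""
          total = 0
          for row in matrix:
              runs, prev = 0, 0
              for j in order:
                  if row[j] and not prev:
                      runs += 1
                  prev = row[j]
              total += max(runs - 1, 0)
          return total

      def anneal(matrix, n_cols, must_precede, steps=20000, t0=5.0):
          order = list(range(n_cols))
          cost = gaps(matrix, order)
          for step in range(steps):
              t = t0 * (1 - step / steps) + 1e-9           # linear cooling schedule
              i, j = random.sample(range(n_cols), 2)
              cand = order[:]
              cand[i], cand[j] = cand[j], cand[i]          # propose a column swap
              pos = {c: k for k, c in enumerate(cand)}
              if any(pos[a] >= pos[b] for a, b in must_precede):
                  continue                                 # violates FISH-style partial order
              c = gaps(matrix, cand)
              if c <= cost or random.random() < math.exp((cost - c) / t):
                  order, cost = cand, c                    # Metropolis acceptance
          return order, cost

      M = [[1, 1, 0, 1, 0],      # toy membership matrix: 2 objects x 5 clones
           [0, 1, 1, 0, 1]]
      print(anneal(M, 5, must_precede=[(0, 3)]))           # clone 0 must precede clone 3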

  10. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    Science.gov (United States)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  11. Automated regenerable microarray-based immunoassay for rapid parallel quantification of mycotoxins in cereals.

    Science.gov (United States)

    Oswald, S; Karsunke, X Y Z; Dietrich, R; Märtlbauer, E; Niessner, R; Knopp, D

    2013-08-01

    An automated flow-through multi-mycotoxin immunoassay using the stand-alone Munich Chip Reader 3 platform and reusable biochips was developed and evaluated. This technology combines a unique microarray, prepared by covalent immobilization of target analytes or derivatives on diamino-poly(ethylene glycol) functionalized glass slides, with a dedicated chemiluminescence readout by a CCD camera. In a first stage, we aimed for the parallel detection of aflatoxins, ochratoxin A, deoxynivalenol, and fumonisins in cereal samples in a competitive indirect immunoassay format. The method combines sample extraction with methanol/water (80:20, v/v), extract filtration and dilution, and immunodetection using horseradish peroxidase-labeled anti-mouse IgG antibodies. The total analysis time, including extraction, extract dilution, measurement, and surface regeneration, was 19 min. The prepared microarray chip was reusable at least 50 times. Oat extract proved to be a representative sample matrix for the preparation of mycotoxin standards and the determination of different types of cereals such as oat, wheat, rye, and maize polenta at relevant concentrations according to European Commission regulation. The recovery rates of fortified samples in different matrices were lower for the more water-soluble fumonisin B1 and deoxynivalenol (55-80% and 58-79%, respectively) and higher for the less polar aflatoxins and ochratoxin A (127-132% and 82-120%, respectively). Finally, the results for wheat samples naturally contaminated with deoxynivalenol were critically compared in an interlaboratory comparison with data obtained from microtiter plate ELISA, the aokinmycontrol® method, and liquid chromatography-mass spectrometry, and were found to be in good agreement. PMID:23620369

  12. Miniaturized Mass-Spectrometry-Based Analysis System for Fully Automated Examination of Conditioned Cell Culture Media

    NARCIS (Netherlands)

    Weber, E.; Pinkse, M.W.H.; Bener-Aksam, E.; Vellekoop, M.J.; Verhaert, P.D.E.M.

    2012-01-01

    We present a fully automated setup for performing in-line mass spectrometry (MS) analysis of conditioned media in cell cultures, in particular focusing on the peptides therein. The goal is to assess peptides secreted by cells in different culture conditions. The developed system is compatible with M...

  13. Parallel worlds : art and sport in contemporary culture

    OpenAIRE

    Tainio, Matti

    2015-01-01

    This research maps the relationships between art and sport through various perspectives using a multidisciplinary approach. In addition, three artistic projects have been included in the research. The research produces a reasoned proposition as to why art and sport should be seen as similar practices in contemporary culture and why this perspective is beneficial. In the everyday view, art and sport seem to be opposite cultural practices, but by adopting an appropriate perspective similarities can be detected. In ord...

  14. Automated Detection of Soma Location and Morphology in Neuronal Network Cultures

    OpenAIRE

    Burcin Ozcan; Pooran Negi; Fernanda Laezza; Manos Papadakis; Demetrio Labate

    2015-01-01

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons ...

  15. Impact of Implementation of an Automated Liquid Culture System on Diagnosis of Tuberculous Pleurisy

    OpenAIRE

    Lee, Byung Hee; Yoon, Seong Hoon; Yeo, Hye Ju; Kim, Dong Wan; Lee, Seung Eun; Cho, Woo Hyun; Lee, Su Jin; Kim, Yun Seong; Jeon, Doosoo

    2015-01-01

    This study was conducted to evaluate the impact of implementation of an automated liquid culture system on the diagnosis of tuberculous pleurisy in an HIV-uninfected patient population. We retrospectively compared the culture yield, time to positivity, and contamination rate of pleural effusion samples in the BACTEC Mycobacteria Growth Indicator Tube 960 (MGIT) and Ogawa media among patients with tuberculous pleurisy. Out of 104 effusion samples, 43 (41.3%) were culture positive on either the...

  16. "Parallel Leadership in an "Unparallel" World"--Cultural Constraints on the Transferability of Western Educational Leadership Theories across Cultures

    Science.gov (United States)

    Goh, Jonathan Wee Pin

    2009-01-01

    With the global economy becoming more integrated, the issues of cross-cultural relevance and transferability of leadership theories and practices have become increasingly urgent. Drawing upon the concept of parallel leadership in schools proposed by Crowther, Kaagan, Ferguson, and Hann as an example, the purpose of this paper is to examine the…

  17. Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram

    Science.gov (United States)

    Zang, Pengxiao; Liu, Gangjun; Zhang, Miao; Dongye, Changlei; Wang, Jie; Pechauer, Alex D.; Hwang, Thomas S.; Wilson, David J.; Huang, David; Li, Dengwang

    2016-01-01

    We propose an innovative registration method to correct motion artifacts in wide-field optical coherence tomography angiography (OCTA) acquired by ultrahigh-speed swept-source OCT (>200 kHz A-scan rate). Considering that the number of A-scans along the fast axis is much higher than the number of positions along the slow axis in the wide-field OCTA scan, a non-orthogonal scheme is introduced. Two en face angiograms in the vertical priority (2 y-fast) are divided into microsaccade-free parallel strips. A gross registration based on large vessels and a fine registration based on small vessels are sequentially applied to register the parallel strips into a composite image. This technique is extended to automatically montage individual registered, motion-free angiograms into an ultrawide-field view. PMID:27446709
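
    The core operation, estimating how far a microsaccade-free strip has drifted before it is merged into the composite, can be illustrated with single-step phase correlation. This is a simplified stand-in for the paper's two-stage (gross/fine, vessel-based) matching, run here on synthetic data:

      import numpy as np

      def strip_shift(ref, strip):
          """Estimate the (dy, dx) translation of `strip` relative to `ref`
          via phase correlation on the FFT cross-power spectrum."""
          F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(strip)
          corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          if dy > ref.shape[0] // 2: dy -= ref.shape[0]    # wrap to signed shifts
          if dx > ref.shape[1] // 2: dx -= ref.shape[1]
          return int(dy), int(dx)

      rng = np.random.default_rng(0)
      ref = rng.random((64, 256))                    # one angiogram strip
      strip = np.roll(ref, (3, -5), axis=(0, 1))     # simulated motion artifact
      print(strip_shift(ref, strip))                 # -> (3, -5)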

  18. Development of an automated mid-scale parallel protein purification system for antibody purification and affinity chromatography.

    Science.gov (United States)

    Zhang, Chi; Long, Alexander M; Swalm, Brooke; Charest, Ken; Wang, Yan; Hu, Jiali; Schulz, Craig; Goetzinger, Wolfgang; Hall, Brian E

    2016-12-01

    Protein purification is often a bottleneck during protein generation for large molecule drug discovery. Therapeutic antibody campaigns typically require the purification of hundreds of monoclonal antibodies (mAbs) during the hybridoma process and lead optimization. With the increase in high-throughput cloning, faster DNA sequencing, and the use of parallel protein expression systems, a need for high-throughput purification approaches has evolved, particularly in the midsize range between 20 ml and 100 ml. To address this, we modified a four-channel Gilson solid phase extraction system (referred to as MG-SPE) with switching valves and sample holding loops to be able to perform standard affinity purification using commercially available columns and micro-titer format deep well blocks. By running 4 samples in parallel, the MG-SPE has the capacity to purify up to 24 samples of greater than 50 ml each using a single-step affinity purification protocol or a two-step protocol consisting of affinity chromatography followed by desalting/buffer exchange overnight (∼12 h run time). Our evaluation of affinity purification using mAbs and Fc-fusion proteins from mammalian cell supernatants demonstrates that the MG-SPE compares favorably with industry standard systems for both protein quality and yield. Overall, the system is simple to operate and fills a void in purification processes where a simple, efficient, automated system is needed for affinity purification of midsize research samples.

  19. An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features

    Science.gov (United States)

    LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper (TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the Cray T3D, the Cray T3E, and a Beowulf cluster of Pentium workstations.
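
    A minimal sketch of the idea using PyWavelets: keep only the strongest detail (high-frequency) coefficients as features, then search a small translation window for the best overlap. The paper's algorithm refines this across decomposition levels and also handles rotation; the data, wavelet choice, and threshold below are illustrative only:

      import numpy as np
      import pywt

      def wavelet_features(img, wavelet="db2", keep=0.05):
          """Binary mask of the strongest high-frequency wavelet coefficients."""
          _, (cH, cV, cD) = pywt.dwt2(img, wavelet)
          mag = np.abs(cH) + np.abs(cV) + np.abs(cD)
          return (mag >= np.quantile(mag, 1 - keep)).astype(float)

      def best_shift(ref, tgt, radius=4):
          """Exhaustive correlation over small shifts at the coarse level."""
          f_ref, f_tgt = wavelet_features(ref), wavelet_features(tgt)
          scores = {}
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  rolled = np.roll(f_tgt, (dy, dx), axis=(0, 1))
                  scores[(dy, dx)] = (f_ref * rolled).sum()
          return max(scores, key=scores.get)

      img = np.zeros((128, 128))
      img[40:80, 30:90] = 1.0                          # toy scene with sharp edges
      shifted = np.roll(img, (6, -4), axis=(0, 1))     # misregistered copy
      # dwt2 halves resolution, so the re-aligning roll is about half the shift:
      print(best_shift(img, shifted))                  # -> approximately (-3, 2)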

  20. Swab culture monitoring of automated endoscope reprocessors after high-level disinfection

    Institute of Scientific and Technical Information of China (English)

    Lung-Sheng Lu; Keng-Liang Wu; Yi-Chun Chiu; Ming-Tzung Lin; Tsung-Hui Hu; King-Wah Chiu

    2012-01-01

    AIM: To conduct a bacterial culture study for monitoring decontamination of automated endoscope reprocessors (AERs) after high-level disinfection (HLD). METHODS: From February 2006 to January 2011, the authors conducted randomized consecutive sampling each month for 7 AERs. The authors collected a total of 420 swab cultures, including 300 cultures from 5 gastroscope AERs and 120 cultures from 2 colonoscope AERs. Swab cultures were obtained from the residual water from the AERs after a full reprocessing cycle. Samples were cultured to test for aerobic bacteria, anaerobic bacteria, and Mycobacterium tuberculosis. RESULTS: The positive culture rate of the AERs was 2.0% (6/300) for gastroscope AERs and 0.8% (1/120) for colonoscope AERs. All the positive cultures, including 6 from gastroscope and 1 from colonoscope AERs, showed monofloral colonization. Of the gastroscope AER samples, 50% (3/6) were colonized by aerobic bacterial and 50% (3/6) by fungal contaminations. CONCLUSION: A full reprocessing cycle of an AER with HLD is adequate for disinfection of the machine. Swab culture is a useful method for monitoring AER decontamination after each reprocessing cycle. Fungal contamination of AERs after reprocessing should also be kept in mind.

  1. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    Science.gov (United States)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/ high risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  2. Automated Static Culture System Cell Module Mixing Protocol and Computational Fluid Dynamics Analysis

    Science.gov (United States)

    Kleis, Stanley J.; Truong, Tuan; Goodwin, Thomas J.

    2004-01-01

    This report documents a fluid dynamic analysis of the proposed Automated Static Culture System (ASCS) cell module mixing protocol. The report consists of a review of some basic fluid dynamics principles appropriate for the mixing of a patch of high-oxygen-content medium into the surrounding medium, which is initially depleted of oxygen, followed by a computational fluid dynamics (CFD) study of this process for the proposed protocol over a range of the governing parameters. The time histories of oxygen concentration distributions and the mechanical shear levels generated are used to characterize the mixing process for different parameter values.

  3. Parallel expression of synaptophysin and evoked neurotransmitter release during development of cultured neurons

    DEFF Research Database (Denmark)

    Ehrhart-Bornstein, M; Treiman, M; Hansen, Gert Helge;

    1991-01-01

    ...and neurotransmitter release were measured in each of the culture types as a function of development for up to 8 days in vitro, using the same batch of cells for both sets of measurements to obtain optimal comparisons. The content and the distribution of synaptophysin in the developing cells were assessed by quantitative immunoblotting and light microscope immunocytochemistry, respectively. In both cell types, a close parallelism was found between the temporal pattern of development in synaptophysin expression and neurotransmitter release. This temporal pattern differed between the two types of neurons. The cerebral cortex neurons showed a biphasic time course of increase in synaptophysin content, paralleled by a biphasic pattern of development in their ability to release [3H]GABA in response to depolarization by glutamate or elevated K+ concentrations. In contrast, a monophasic, approximately linear increase...

  4. A Semi-Automated Point Cloud Processing Methodology for 3D Cultural Heritage Documentation

    Science.gov (United States)

    Kıvılcım, C. Ö.; Duran, Z.

    2016-06-01

    The preliminary phase in any architectural heritage project is to obtain metric measurements and documentation of the building and its individual elements. On the other hand, conventional measurement techniques require tremendous resources and lengthy project completion times for architectural surveys and 3D model production. Over the past two decades, the widespread use of laser scanning and digital photogrammetry has significantly altered the heritage documentation process. Furthermore, advances in these technologies have enabled robust data collection and reduced user workload for generating various levels of products, from single buildings to expansive cityscapes. More recently, the use of procedural modelling methods and BIM-relevant applications for historic building documentation purposes has become an active area of research; however, fully automated systems for cultural heritage documentation remain an open problem. In this paper, we present a semi-automated methodology for 3D façade modelling of cultural heritage assets, based on parametric and procedural modelling techniques and using airborne and terrestrial laser scanning data. We present the contribution of our methodology, which we implemented in an open source software environment, using the example project of a 16th century early classical era Ottoman structure, Sinan the Architect's Şehzade Mosque in Istanbul, Turkey.

  5. FY1995 study of low power LSI design automation software with parallel processing; 1995 nendo heiretsu shori wo katsuyoshita shodenryoku LSI muke sekkei jidoka software no kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The needs for low power LSIs have rapidly increased recently. For low power LSI development, not only new circuit technologies but also new design automation tools supporting those technologies are indispensable. The purpose of this project is to develop new design automation software able to design new digital LSIs with much lower power than that of conventional CMOS LSIs. A new design automation software package for very low power LSIs has been developed targeting the pass-transistor logic SPL, a dedicated low power circuit technology. The software includes a logic synthesis function for pass-transistor-based macrocells and a macrocell placement function. Several new algorithms have been developed for the software, e.g., BDD construction. Some of them are designed and implemented for parallel processing in order to reduce the processing time. The logic synthesis function was tested on a set of benchmarks and finally applied to a low power CPU design. The designed 8-bit CPU was fully compatible with the Zilog Z-80. The power dissipation of the CPU was compared with that of a commercial CMOS Z-80: the new CPU reduced power consumption by up to 82%. On the other hand, parallel processing speedup was measured on the macrocell placement function, where a 34-fold speedup was realized. (NEDO)

  6. Automated detection of soma location and morphology in neuronal network cultures.

    Directory of Open Access Journals (Sweden)

    Burcin Ozcan

    Automated identification of the primary components of a neuron and extraction of its sub-cellular features are essential steps in many quantitative studies of neuronal networks. The focus of this paper is the development of an algorithm for the automated detection of the location and morphology of somas in confocal images of neuronal network cultures. This problem is motivated by applications in high-content screenings (HCS), where the extraction of multiple morphological features of neurons on large data sets is required. Existing algorithms are not very efficient when applied to the analysis of confocal image stacks of neuronal cultures. In addition to the usual difficulties associated with the processing of fluorescent images, these types of stacks contain a small number of images, so that only a small number of pixels are available along the z-direction and it is challenging to apply conventional 3D filters. The algorithm we present in this paper applies a number of innovative ideas from the theory of directional multiscale representations and involves the following steps: (i) image segmentation based on support vector machines with specially designed multiscale filters; (ii) soma extraction and separation of contiguous somas, using a combination of the level set method and directional multiscale filters. We also present an approach to extract the soma's surface morphology using the 3D shearlet transform. Extensive numerical experiments show that our algorithms are computationally efficient and highly accurate in segmenting the somas and separating contiguous ones. The algorithms presented in this paper will facilitate the development of a high-throughput quantitative platform for the study of neuronal networks for HCS applications.

  7. Identifying and Quantifying Cultural Factors That Matter to the IT Workforce: An Approach Based on Automated Content Analysis

    DEFF Research Database (Denmark)

    Schmiedel, Theresa; Müller, Oliver; Debortoli, Stefan;

    2016-01-01

    Organizational culture represents a key success factor in highly competitive environments, such as the IT sector. Thus, IT companies need to understand what makes up a culture that fosters employee performance. While existing research typically uses self-report questionnaires to study the relation of culture and the success of companies, the validity of this approach is often discussed, and researchers call for new ways of studying culture. Therefore, our research goal is to present an alternative approach to culture analysis for examining which cultural factors matter to the IT workforce. Our study builds on 112,610 online reviews of Fortune 500 IT companies collected from Glassdoor, an online platform on which current and former employees can anonymously review companies and their management. We perform an automated content analysis to identify cultural factors that employees emphasize...
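
    As a flavor of what such an automated content analysis can look like: the abstract does not name the authors' exact technique, so topic modelling here is an assumption, and the four review texts are invented. A minimal sketch with scikit-learn:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      reviews = [                          # stand-ins for Glassdoor review texts
          "great work life balance and flexible hours",
          "management ignores feedback, culture of overwork",
          "supportive team, strong engineering culture",
          "long hours but good pay and smart colleagues",
      ]

      vec = CountVectorizer(stop_words="english")
      X = vec.fit_transform(reviews)                   # document-term matrix

      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
      terms = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[-4:][::-1]]
          print(f"cultural factor {k}: {', '.join(top)}")

    On a corpus of this size the factors are meaningless; the point is only the pipeline shape: vectorize the free-text reviews, fit an unsupervised model, and read off the terms that characterize each factor.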

  8. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    Science.gov (United States)

    Ker, Dai Fei Elmer; Weiss, Lee E; Junkers, Silvina N; Chen, Mei; Yin, Zhaozheng; Sandbothe, Michael F; Huh, Seung-il; Eom, Sungeun; Bise, Ryoma; Osuna-Highley, Elvira; Kanade, Takeo; Campbell, Phil G

    2011-01-01

    Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and developing robotic cell...
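
    The scheduling step, predicting when confluency will cross the passaging threshold so operators can be notified 4 hours ahead, reduces to extrapolating a growth curve. A minimal sketch under an assumed exponential-growth model; the readings and the 80% threshold are invented for illustration:

      import numpy as np

      # Toy confluency readings (fraction of dish covered) at hourly intervals.
      hours = np.array([0, 1, 2, 3, 4, 5], dtype=float)
      confluency = np.array([0.22, 0.26, 0.31, 0.37, 0.44, 0.52])

      THRESHOLD = 0.80                      # passage at 80% confluency (illustrative)

      # Early-phase cultures grow roughly exponentially: fit a log-linear model.
      slope, intercept = np.polyfit(hours, np.log(confluency), 1)
      t_cross = (np.log(THRESHOLD) - intercept) / slope

      print(f"predicted to reach {THRESHOLD:.0%} confluency at t = {t_cross:.1f} h")
      if t_cross - hours[-1] <= 4.0:        # the paper notifies operators 4 h ahead
          print("notify operators now")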

  9. A novel automated bioreactor for scalable process optimisation of haematopoietic stem cell culture.

    Science.gov (United States)

    Ratcliffe, E; Glen, K E; Workman, V L; Stacey, A J; Thomas, R J

    2012-10-31

    Proliferation and differentiation of haematopoietic stem cells (HSCs) from umbilical cord blood at large scale will potentially underpin production of a number of therapeutic cellular products in development, including erythrocytes and platelets. However, to achieve production processes that are scalable and optimised for cost and quality, scaled-down development platforms that can define process parameter tolerances and consequent manufacturing controls are essential. We have demonstrated the potential of a new, automated, 24×15 mL replicate suspension bioreactor system, with online monitoring and control, to develop an HSC proliferation and differentiation process for erythroid committed cells (CD71+, CD235a+). Cell proliferation was relatively robust to cell density and oxygen levels and reached up to 6 population doublings over 10 days. The maximum suspension culture density for a 48 h total media exchange protocol was established to be in the order of 10^7 cells/mL. This system will be valuable for the further HSC suspension culture cost reduction and optimisation necessary before the application of conventional stirred tank technology to scaled manufacture of HSC derived products.
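
    For readers unfamiliar with the metric, cumulative population doublings are just a base-2 logarithm of the fold expansion. The sketch below shows the arithmetic with an assumed seeding density; the abstract gives only the doubling count and the density ceiling:

      import math

      def population_doublings(n_start, n_end):
          """Cumulative population doublings between two viable-cell densities."""
          return math.log2(n_end / n_start)

      # Illustrative numbers consistent with the abstract: 6 doublings from an
      # assumed 10^5 cells/mL seed approaches the ~10^7 cells/mL ceiling.
      seed, final = 1e5, 1e5 * 2 ** 6
      print(population_doublings(seed, final), f"doublings -> {final:.1e} cells/mL")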

  10. Cultural Heritage: An example of graphical documentation with automated photogrammetric systems

    Science.gov (United States)

    Giuliano, M. G.

    2014-06-01

    In the field of Cultural Heritage, the use of automated photogrammetric systems based on Structure from Motion (SfM) techniques is widespread, in particular for the study and documentation of ancient ruins. This work was carried out during the PhD cycle that produced the "Carta Archeologica del territorio intorno al monte Massico". The study presents the archaeological documentation of the mausoleum "Torre del Ballerino", located in the south-west area of Falciano del Massico, along the Via Appia. The graphic documentation was achieved using a photogrammetric system (Image Based Modeling) and a classical survey with a total station, a Nikon Nivo C. The data acquisition was carried out with a Canon EOS 5D Mark II digital camera with a Canon EF 17-40 mm f/4L USM @ 20 mm, with images captured in RAW and corrected in Adobe Lightroom. During data processing, camera calibration and orientation were carried out with the software Agisoft PhotoScan, and the final result was a scaled 3D model of the monument, imported into the software MeshLab for different views. Three orthophotos in JPEG format were extracted from the model and then imported into AutoCAD, producing façade surveys.

  11. Field measurement of acid gases and soluble anions in atmospheric particulate matter using a parallel plate wet denuder and an alternating filter-based automated analysis system.

    Science.gov (United States)

    Boring, C Bradley; Al-Horr, Rida; Genfa, Zhang; Dasgupta, Purnendu K; Martin, Michael W; Smith, William F

    2002-03-15

    We present a new fully automated instrument for the measurement of acid gases and soluble anionic constituents of atmospheric particulate matter. The instrument operates in two independent parallel channels. In one channel, a wet denuder collects soluble acid gases; these are analyzed by anion chromatography (IC). In a second channel, a cyclone removes large particles and the aerosol stream is then processed by another wet denuder to remove potentially interfering gases. The particles are then collected by one of two glass fiber filters which are alternately sampled, washed, and dried. The washings are preconcentrated and analyzed by IC. Detection limits of low to subnanogram per cubic meter concentrations of most gaseous and particulate constituents can be readily attained. The instrument has been extensively field-tested; some field data are presented. Results of attempts to decipher the total anionic constitution of urban ambient aerosol by IC-MS analysis are also presented.

  12. An engineered approach to stem cell culture: automating the decision process for real-time adaptive subculture of stem cells.

    Directory of Open Access Journals (Sweden)

    Dai Fei Elmer Ker

    Current cell culture practices are dependent upon human operators and remain laborious and highly subjective, resulting in large variations and inconsistent outcomes, especially when using visual assessments of cell confluency to determine the appropriate time to subculture cells. Although efforts to automate cell culture with robotic systems are underway, the majority of such systems still require human intervention to determine when to subculture. Thus, it is necessary to accurately and objectively determine the appropriate time for cell passaging. Optimal stem cell culturing that maintains cell pluripotency while maximizing cell yields will be especially important for efficient, cost-effective stem cell-based therapies. Toward this goal we developed a real-time computer vision-based system that monitors the degree of cell confluency with a precision of 0.791±0.031 and recall of 0.559±0.043. The system consists of an automated phase-contrast time-lapse microscope and a server. Multiple dishes are sequentially imaged and the data is uploaded to the server that performs computer vision processing, predicts when cells will exceed a pre-defined threshold for optimal cell confluency, and provides a Web-based interface for remote cell culture monitoring. Human operators are also notified via text messaging and e-mail 4 hours prior to reaching this threshold and immediately upon reaching this threshold. This system was successfully used to direct the expansion of a paradigm stem cell population, C2C12 cells. Computer-directed and human-directed control subcultures required 3 serial cultures to achieve the theoretical target cell yield of 50 million C2C12 cells and showed no difference for myogenic and osteogenic differentiation. This automated vision-based system has potential as a tool toward adaptive real-time control of subculturing, cell culture optimization and quality assurance/quality control, and it could be integrated with current and...

  13. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Source-code for MATLAB and ImageJ is freely available under a permissive open-source license.
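
    The segmentation idea, that cells in PCM are locally textured while the background is locally flat, can be sketched with a moving-window standard deviation. This omits the halo correction and the rest of the published pipeline, and the window size, threshold, and synthetic image below are invented:

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_contrast_mask(img, size=15, thresh=0.02):
          """Flag pixels whose local standard deviation exceeds a threshold."""
          mean = uniform_filter(img, size)
          mean_sq = uniform_filter(img * img, size)
          local_std = np.sqrt(np.clip(mean_sq - mean * mean, 0, None))
          return local_std > thresh

      rng = np.random.default_rng(1)
      img = np.full((200, 200), 0.5) + rng.normal(0, 0.002, (200, 200))  # flat background
      img[60:120, 80:150] += rng.normal(0, 0.08, (60, 70))               # "cellular" texture
      mask = local_contrast_mask(img)
      print(f"confluency estimate: {mask.mean():.1%}")                   # covered fraction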

  14. Design and Performance of an Automated Bioreactor for Cell Culture Experiments in a Microgravity Environment

    Science.gov (United States)

    Kim, Youn-Kyu; Park, Seul-Hyun; Lee, Joo-Hee; Choi, Gi-Hyuk

    2015-03-01

    In this paper, we describe the development of a bioreactor for a cell-culture experiment on the International Space Station (ISS). The bioreactor is an experimental device for culturing mouse muscle cells in a microgravity environment. The purpose of the experiment was to assess the impact of microgravity on the muscles, to address the possibility of long-term human residence in space. After investigation of previously developed bioreactors and analysis of the requirements for microgravity cell culture experiments, a bioreactor design is herein proposed that is able to automatically culture 32 samples simultaneously. This reactor design is capable of automatic control of temperature, humidity, and culture-medium injection rate, and satisfies the interface requirements of the ISS. Since bioreactors are vulnerable to cell contamination, the medium-circulation modules were designed to be completely replaceable, in order to reuse the bioreactor after each experiment. The bioreactor control system is designed to circulate culture media to 32 culture chambers at a maximum speed of 1 ml/min, to maintain the temperature of the reactor at 36°C, and to keep the relative humidity of the reactor above 70%. Because bubbles in the culture media negatively affect cell culture, a de-bubbler unit was provided to eliminate such bubbles. A working model of the reactor was built according to the new design to verify its performance, and was used to perform a cell culture experiment that confirmed the feasibility of this device.
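
    The stated setpoints (36°C, relative humidity above 70%, medium flow up to 1 ml/min) map naturally onto a periodic control loop. A minimal on/off (bang-bang) sketch; the sensor stubs, tick rate, and chosen flow value are assumptions for illustration, not the flight controller:

      import random, time

      SETPOINT_T, MIN_RH, MAX_FLOW = 36.0, 70.0, 1.0   # from the abstract

      def read_temperature():      # hypothetical sensor stub
          return 36.0 + random.uniform(-0.5, 0.5)

      def read_humidity():         # hypothetical sensor stub
          return 72.0 + random.uniform(-3.0, 3.0)

      for _ in range(5):           # one iteration per control tick
          t, rh = read_temperature(), read_humidity()
          heater_on = t < SETPOINT_T            # simple on/off (bang-bang) control
          humidifier_on = rh < MIN_RH
          pump_flow = min(0.5, MAX_FLOW)        # constant perfusion, capped at 1 ml/min
          print(f"T={t:.1f}C heater={heater_on}  RH={rh:.0f}% "
                f"humidifier={humidifier_on}  flow={pump_flow} ml/min")
          time.sleep(0.1)

    A real controller would add hysteresis or PID terms to avoid rapid actuator cycling; the sketch shows only the sense-decide-actuate structure.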

  15. Automated method for the rapid and precise estimation of adherent cell culture characteristics from phase contrast microscopy images.

    Science.gov (United States)

    Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas

    2014-03-01

    The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. PMID:24037521

  16. Repeated stimulation of cultured networks of rat cortical neurons induces parallel memory traces

    NARCIS (Netherlands)

    Feber, le J.; Witteveen, T.; Veenendaal, T.M.; Dijkstra, J.

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and...

  17. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    Science.gov (United States)

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  18. The performance of fully automated urine analysis results for predicting the need of urine culture test

    Directory of Open Access Journals (Sweden)

    Hatice Yüksel

    2014-06-01

    Objectives: Urinalysis and urine culture are the most common tests for diagnosis of urinary tract infections. The aim of our study is to examine the diagnostic performance of urine analysis and its role in determining the need for urine culture. Methods: Urine culture and urine analysis results of 362 patients were retrospectively analyzed. Culture results were taken as the reference for the chemical and microscopic examination of urine; the diagnostic accuracy of test parameters that may be markers for urinary tract infection, and the performance of urine analysis in predicting urine culture requirements, were calculated. Results: A total of 362 urine culture results were evaluated, and 67% of them were negative. The results for leukocyte esterase and nitrite in chemical analysis, and for leukocytes and bacteria in microscopic analysis, were normal in 50.4% of culture-negative urines. In diagnostic accuracy calculations, leukocyte esterase (86.1%) and microscopic leukocytes (88.0%) showed high sensitivity, while nitrite (95.4%) and bacteria (86.6%) showed high specificity. The area under the curve was calculated as 0.852 in ROC analysis for microscopic examination for leukocytes. Conclusion: Fully automatic urine analyzers can provide sufficient diagnostic accuracy for urine analysis. Evaluating urine analysis results effectively can predict the necessity of urine culture requests and may especially contribute to a reduction in workload and cost. J Clin Exp Invest 2014; 5(2): 286-289
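
    The reported figures are standard confusion-matrix rates. A worked sketch of the arithmetic; the raw counts below are hypothetical (the abstract reports only rates and the 362-sample total), chosen to reproduce numbers of similar magnitude:

      def sensitivity(tp, fn):
          """True-positive rate: detected infections / all culture-positive samples."""
          return tp / (tp + fn)

      def specificity(tn, fp):
          """True-negative rate: clean results / all culture-negative samples."""
          return tn / (tn + fp)

      # Hypothetical split: 362 samples, ~67% culture-negative.
      tp, fn = 102, 17     # illustrative counts giving ~86% sensitivity
      tn, fp = 206, 37     # illustrative counts giving ~85% specificity

      print(f"sensitivity = {sensitivity(tp, fn):.1%}")
      print(f"specificity = {specificity(tn, fp):.1%}")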

  19. Characterization and Classification of Adherent Cells in Monolayer Culture using Automated Tracking and Evolutionary Algorithms

    OpenAIRE

    Zhang, Z.; Bedder, M.; Smith, S. L.; Walker, D.; Shabir, S.; Southgate, J.

    2016-01-01

    This paper presents a novel method for tracking and characterizing adherent cells in monolayer culture. A system of cell tracking employing computer vision techniques was applied to time-lapse videos of replicate normal human uro-epithelial cell cultures exposed to different concentrations of adenosine triphosphate (ATP) and a selective purinergic P2X antagonist (PPADS), acquired over a 24-hour period. Subsequent analysis following feature extraction demonstrated the ability of the technique t...

  20. Developmental downregulation of GABAergic drive parallels formation of functional synapses in cultured mouse neocortical networks.

    Science.gov (United States)

    Klueva, Julia; Meis, Susanne; de Lima, Ana D; Voigt, Thomas; Munsch, Thomas

    2008-06-01

    Networks of cortical neurons in vitro spontaneously develop synchronous oscillatory electrical activity at around the second week in culture. However, the underlying mechanisms, and in particular the role of GABAergic interneurons in the initiation and synchronization of oscillatory activity in developing cortical networks, remain elusive. Here, we examined the intrinsic properties and the development of GABAergic and glutamatergic input onto presumed projection neurons (PNs) and large interneurons (L-INs) in cortical cultures of GAD67-GFP mice. Cultures developed spontaneous synchronous activity already at 5-7 days in vitro (DIV), as revealed by imaging transient changes in Fluo-3 fluorescence. Concurrently, spontaneous glutamate-mediated and GABA(A)-mediated postsynaptic currents (sPSCs) occurred at 5 DIV. For both types of neurons the frequency of glutamatergic and GABAergic sPSCs increased with DIV, whereas the charge transfer of glutamatergic sPSCs increased and the charge transfer of GABAergic sPSCs decreased with cultivation time. The ratio between GABAergic and the overall charge transfer was significantly reduced with DIV for L-INs and PNs, indicating an overall reduction in GABAergic synaptic drive with maturation of the network. In contrast, analysis of miniature PSCs (mPSCs) revealed no significant changes of charge transfer with DIV for both types of neurons, indicating that the reduction in GABAergic drive was not due to a decreased number of functional synapses. Our data suggest that the global reduction in GABAergic synaptic drive, together with more synaptic input to PNs and L-INs during maturation, may enhance rhythmogenesis of the network and increase synchronization at the level of population bursts. PMID:18361402

  1. EST2uni: an open, parallel tool for automated EST analysis and database creation, with a data mining web interface and microarray expression data integration

    Directory of Open Access Journals (Sweden)

    Nuez Fernando

    2008-01-01

    Background: Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results: We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools, and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion: The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http
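
    The per-sequence stages of such a pipeline are embarrassingly parallel, which is why running it on a PC cluster pays off. A toy sketch of the pattern with a process pool standing in for cluster nodes; the trimming and annotation functions are invented placeholders, not EST2uni's actual tools:

      from multiprocessing import Pool

      def preprocess(est):
          """Placeholder for trimming: drop ambiguous bases, normalize case."""
          return est.strip().upper().replace("N", "")

      def annotate(seq):
          """Placeholder for annotation: report GC content per cleaned EST."""
          return {"seq": seq, "gc": (seq.count("G") + seq.count("C")) / len(seq)}

      ests = ["acgtNNNacgg", "ttgcaNcgta", "ggccattgca", "aacgtgNNtt"]

      if __name__ == "__main__":
          with Pool(4) as pool:            # per-EST workers, cluster-node analogue
              cleaned = pool.map(preprocess, ests)
              records = pool.map(annotate, cleaned)
          for r in records:
              print(r)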

  2. Automated Voxel Model from Point Clouds for Structural Analysis of Cultural Heritage

    Science.gov (United States)

    Bitelli, G.; Castellazzi, G.; D'Altri, A. M.; De Miranda, S.; Lambertini, A.; Selvaggi, I.

    2016-06-01

    In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, for supporting special studies regarding materials and constructive characteristics, and finally for structural analysis. Some proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is in the form of point clouds. We developed a semi-automatic procedure to convert point clouds, acquired from laser scanning or digital photogrammetry, to a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented, with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. In order to achieve this result, a voxel model with variable resolution is produced. Different parameters are compared, and different steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building located in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
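
    The core point-cloud-to-voxel step can be sketched in a few lines of NumPy: bin each point into a regular grid and mark the bin occupied. Variable resolution and FEM export are left out, and the synthetic "wall" cloud and voxel size are invented:

      import numpy as np

      def voxelize(points, voxel_size):
          """Map an (N, 3) point cloud onto an occupied/empty boolean voxel grid."""
          mins = points.min(axis=0)
          idx = np.floor((points - mins) / voxel_size).astype(int)
          dims = idx.max(axis=0) + 1
          grid = np.zeros(dims, dtype=bool)
          grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True   # occupied if any point lands in the cell
          return grid

      rng = np.random.default_rng(2)
      wall = rng.uniform([0, 0, 0], [10.0, 0.3, 6.0], size=(5000, 3))  # toy masonry wall, metres
      grid = voxelize(wall, voxel_size=0.25)
      print(grid.shape, int(grid.sum()), "occupied voxels")

    In a structural workflow, each occupied voxel can then be meshed as a hexahedral finite element, which is what makes the voxel format a convenient bridge to FEM.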

  3. Scalable Transcriptome Preparation for Massive Parallel Sequencing

    OpenAIRE

    Stranneheim, Henrik; Werne, Beata; Sherwood, Ellen; Lundeberg, Joakim

    2011-01-01

    Background The tremendous output of massive parallel sequencing technologies requires automated robust and scalable sample preparation methods to fully exploit the new sequence capacity. Methodology In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was comp...

  4. Scalable Transcriptome Preparation for Massive Parallel Sequencing

    OpenAIRE

    Henrik Stranneheim; Beata Werne; Ellen Sherwood; Joakim Lundeberg

    2011-01-01

    BACKGROUND: The tremendous output of massive parallel sequencing technologies requires automated robust and scalable sample preparation methods to fully exploit the new sequence capacity. METHODOLOGY: In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was co...

  5. Cultural adaptation and reliability for Brazil of the Automated Telephone Disease Management: Preliminary results

    Directory of Open Access Journals (Sweden)

    Talita Balaminut

    2012-01-01

    Full Text Available OBJECTIVES: To translate and culturally adapt the Automated Telephone Disease Management (ATDM) Satisfaction Scales for Brazil, and to evaluate the reliability of the adapted version in Brazilian adults with diabetes mellitus (DM). METHODS: A methodological study whose cultural adaptation process included: translation, expert committee review, back-translation, semantic analysis and pre-testing. The study included a sample of 39 Brazilian adults with DM enrolled in an educational programme in the interior of São Paulo state. RESULTS: The adapted version of the instrument showed good acceptance, with the items easily understood by participants, and reliability ranging between 0.30 and 0.43. CONCLUSION: After analysis of the psychometric properties and completion of the validation process in Brazil, the instrument can be used by Brazilian researchers, enabling comparison with other cultures.

  6. Tensor contraction engine: Abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories

    International Nuclear Information System (INIS)

    We develop a symbolic manipulation program and program generator (Tensor Contraction Engine or TCE) that automatically derives the working equations of a well-defined model of second-quantized many-electron theories and synthesizes efficient parallel computer programs on the basis of these equations. Provided an ansatz of a many-electron theory model, TCE performs valid contractions of creation and annihilation operators according to Wick's theorem, consolidates identical terms, and reduces the expressions into the form of multiple tensor contractions acted by permutation operators. Subsequently, it determines the binary contraction order for each multiple tensor contraction with the minimal operation and memory cost, factorizes common binary contractions (defines intermediate tensors), and identifies reusable intermediates. The resulting ordered list of binary tensor contractions, additions, and index permutations is translated into an optimized program that is combined with the NWChem and UTChem computational chemistry software packages. The programs synthesized by TCE take advantage of spin symmetry, Abelian point-group symmetry, and index permutation symmetry at every stage of calculations to minimize the number of arithmetic operations and storage requirement, adjust the peak local memory usage by index range tiling, and support parallel I/O interfaces and dynamic load balancing for parallel executions. We demonstrate the utility of TCE through automatic derivation and implementation of parallel programs for various models of configuration-interaction theory (CISD, CISDT, CISDTQ), many-body perturbation theory[MBPT(2), MBPT(3), MBPT(4)], and coupled-cluster theory (LCCD, CCD, LCCSD, CCSD, QCISD, CCSDT, and CCSDTQ)
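
    The cost-driven ordering at the heart of this step can be illustrated with a toy Python sketch: tensors are given as index strings, the cost of a binary contraction is approximated by the product of the extents of all indices involved, and a greedy rule repeatedly contracts the cheapest pair. This is only a sketch of the idea; TCE determines the truly minimal-cost contraction order (and also factorizes common subexpressions), which a greedy rule does not guarantee.

    from itertools import combinations

    def pair_cost(a_idx, b_idx, dims):
        """FLOP-count proxy: product of the extents of all indices involved."""
        cost = 1
        for k in set(a_idx) | set(b_idx):
            cost *= dims[k]
        return cost

    def greedy_order(tensors, dims):
        """Greedily pick the cheapest binary contraction until one tensor remains.

        `tensors` maps a name to its index string, e.g. {'t2': 'aikc', ...};
        an index shared by exactly two operands is summed over (contracted).
        """
        work = dict(tensors)
        plan = []
        while len(work) > 1:
            (x, y), cost = min(
                (((a, b), pair_cost(work[a], work[b], dims))
                 for a, b in combinations(work, 2)),
                key=lambda t: t[1])
            others = set().union(*(set(v) for k, v in work.items() if k not in (x, y)))
            kept = ''.join(k for k in dict.fromkeys(work[x] + work[y])
                           if k in others or (k in work[x]) != (k in work[y]))
            plan.append((x, y, kept, cost))
            intermediate = f'({x}*{y})'
            del work[x], work[y]
            work[intermediate] = kept
        return plan

    # Toy input: three tensors over indices with the given extents
    dims = {'a': 20, 'b': 20, 'i': 5, 'j': 5, 'k': 5, 'c': 20}
    for step in greedy_order({'t2': 'aikc', 'v': 'kcjb', 'f': 'ab'}, dims):
        print(step)   # (left operand, right operand, surviving indices, cost)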

  7. Organizational changes and automation: By means of a customer-oriented policy the so-called 'island culture' disappears: Part 2

    International Nuclear Information System (INIS)

    Automation offers great opportunities in the efforts of energy utilities in the Netherlands to reorganize towards more customer-oriented businesses. However, automation in itself is not enough. First, the organizational structure has to be changed considerably. Various energy utilities have already started on it. The restructuring principle is the same everywhere, but the way it is implemented differs widely. In this article attention is paid to different customer information systems. These systems can put an end to the so-called island culture within the energy utility organizations. The systems discussed are IRD of Systema and RIVA of SAP (both German software businesses), and two Dutch systems: Numis-2000 of Multihouse and KIS/400 of NUON Info-Systemen

  8. Automated vector selection of SIVQ and parallel computing integration MATLAB™: Innovations supporting large-scale and high-throughput image analysis studies

    Directory of Open Access Journals (Sweden)

    Jerome Cheng

    2011-01-01

    Full Text Available Introduction: Spatially invariant vector quantization (SIVQ) is a texture- and color-based image matching algorithm that queries the image space through the use of ring vectors. In prior studies, the selection of one or more optimal vectors for a particular feature of interest required a manual process, with the user initially stochastically selecting candidate vectors and subsequently testing them upon other regions of the image to verify the vector's sensitivity and specificity properties (typically by reviewing a resultant heat map). In carrying out the prior efforts, the SIVQ algorithm was noted to exhibit highly scalable computational properties, where each region of analysis can take place independently of others, making a compelling case for the exploration of its deployment on high-throughput computing platforms, with the hypothesis that such an exercise will result in performance gains that scale linearly with increasing processor count. Methods: An automated process was developed for the selection of optimal ring vectors to serve as the predicate matching operator in defining histopathological features of interest. Briefly, candidate vectors were generated from every possible coordinate origin within a user-defined vector selection area (VSA) and subsequently compared against user-identified positive and negative "ground truth" regions on the same image. Each vector from the VSA was assessed for its goodness-of-fit to both the positive and negative areas via the use of the receiver operating characteristic (ROC) transfer function, with each assessment resulting in an associated area-under-the-curve (AUC) figure of merit. Results: Use of the above-mentioned automated vector selection process was demonstrated in two cases of use: first, to identify malignant colonic epithelium, and second, to identify soft tissue sarcoma. For both examples, a very satisfactory optimized vector was identified, as defined by the AUC metric. Finally, as an
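
    The vector-scoring loop at the core of this method can be sketched in a few lines of Python (NumPy and scikit-learn assumed; SIVQ itself is a MATLAB-integrated tool, and the cosine-similarity matcher and synthetic data below are stand-ins for the actual ring-vector matching operator):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    def match_scores(vector, patches):
        """Stand-in matching operator: cosine similarity of one vector to patches."""
        v = vector / np.linalg.norm(vector)
        p = patches / np.linalg.norm(patches, axis=1, keepdims=True)
        return p @ v

    # Hypothetical data: one 64-dim candidate per coordinate origin in the VSA,
    # plus user-labelled positive and negative "ground truth" patches.
    candidates = rng.normal(size=(500, 64))
    positives = rng.normal(loc=0.3, size=(200, 64))   # feature of interest
    negatives = rng.normal(loc=0.0, size=(200, 64))   # background
    patches = np.vstack([positives, negatives])
    labels = np.r_[np.ones(200), np.zeros(200)]

    # Assess every candidate via the ROC transfer function and keep the best AUC.
    aucs = [roc_auc_score(labels, match_scores(v, patches)) for v in candidates]
    best = int(np.argmax(aucs))
    print(f"best candidate #{best}: AUC = {aucs[best]:.3f}")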

  9. Reductions in self-reported stress and anticipatory heart rate with the use of a semi-automated parallel parking system.

    Science.gov (United States)

    Reimer, Bryan; Mehler, Bruce; Coughlin, Joseph F

    2016-01-01

    Drivers' reactions to a semi-autonomous assisted parallel parking technology were evaluated in a field experiment. A sample of 42 drivers balanced by gender and across three age groups (20-29, 40-49, 60-69) were given a comprehensive briefing, saw the technology demonstrated, practiced parallel parking 3 times each with and without the assistive technology, and then were assessed on an additional 3 parking events each with and without the technology. Anticipatory stress, as measured by heart rate, was significantly lower when drivers approached a parking space knowing that they would be using the assistive technology as opposed to manually parking. Self-reported stress levels following assisted parks were also lower. Thus, both subjective and objective data support the position that the assistive technology reduced stress levels in drivers who were given detailed training. It was observed that drivers decreased their use of turn signals when using the semi-autonomous technology, raising a caution concerning unintended lapses in safe driving behaviors that may occur when assistive technologies are used.

  10. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Science.gov (United States)

    Katta, Mohan A V S K; Khan, Aamir W; Doddamani, Dadakhalandar; Thudi, Mahendar; Varshney, Rajeev K

    2015-01-01

    Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High throughput platforms like Illumina HiSeq produce terabytes of raw data that requires quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base level statistics. It can be used as stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and internally, a fine grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user friendly format. The pipeline developed presents a simple menu driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms in speed against other similar existing QC pipeline/tools. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at the URL https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.
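
    The read/base-level statistics step can be mimicked with a short Python sketch run over many samples in parallel (Raspberry itself is written in C on HTSlib; the directory layout below is hypothetical):

    import glob
    import gzip
    from multiprocessing import Pool

    def fastq_stats(path):
        """Read and base counts plus mean Phred+33 quality for one FASTQ file."""
        opener = gzip.open if path.endswith('.gz') else open
        reads = bases = qsum = 0
        with opener(path, 'rt') as fh:
            for i, line in enumerate(fh):
                if i % 4 == 1:                    # sequence line
                    reads += 1
                    bases += len(line.strip())
                elif i % 4 == 3:                  # quality line
                    qsum += sum(ord(c) - 33 for c in line.strip())
        return path, reads, bases, (qsum / bases if bases else 0.0)

    if __name__ == '__main__':
        files = sorted(glob.glob('samples/*.fastq*'))   # hypothetical layout
        with Pool() as pool:                            # one worker per core
            for path, reads, bases, meanq in pool.map(fastq_stats, files):
                print(f'{path}\t{reads} reads\t{bases} bases\tmeanQ={meanq:.1f}')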

  11. NGS-QCbox and Raspberry for Parallel, Automated and Rapid Quality Control Analysis of Large-Scale Next Generation Sequencing (Illumina) Data.

    Directory of Open Access Journals (Sweden)

    Mohan A V S K Katta

    Full Text Available Rapid popularity and adaptation of next generation sequencing (NGS) approaches have generated huge volumes of data. High throughput platforms like Illumina HiSeq produce terabytes of raw data that requires quick processing. Quality control of the data is an important component prior to the downstream analyses. To address these issues, we have developed a quality control pipeline, NGS-QCbox that scales up to process hundreds or thousands of samples. Raspberry is an in-house tool, developed in C language utilizing HTSlib (v1.2.1) (http://htslib.org), for computing read/base level statistics. It can be used as stand-alone application and can process both compressed and uncompressed FASTQ format files. NGS-QCbox integrates Raspberry with other open-source tools for alignment (Bowtie2), SNP calling (SAMtools) and other utilities (bedtools) towards analyzing raw NGS data at higher efficiency and in high-throughput manner. The pipeline implements batch processing of jobs using Bpipe (https://github.com/ssadedin/bpipe) in parallel and internally, a fine grained task parallelization utilizing OpenMP. It reports read and base statistics along with genome coverage and variants in a user friendly format. The pipeline developed presents a simple menu driven interface and can be used in either quick or complete mode. In addition, the pipeline in quick mode outperforms in speed against other similar existing QC pipeline/tools. The NGS-QCbox pipeline, Raspberry tool and associated scripts are made available at the URL https://github.com/CEG-ICRISAT/NGS-QCbox and https://github.com/CEG-ICRISAT/Raspberry for rapid quality control analysis of large-scale next generation sequencing (Illumina) data.

  12. Toward fully automated high performance computing drug discovery: a massively parallel virtual screening pipeline for docking and molecular mechanics/generalized Born surface area rescoring to improve enrichment.

    Science.gov (United States)

    Zhang, Xiaohua; Wong, Sergio E; Lightstone, Felice C

    2014-01-27

    In this work we announce and evaluate a high throughput virtual screening pipeline for in-silico screening of virtual compound databases using high performance computing (HPC). Notable features of this pipeline are an automated receptor preparation scheme with unsupervised binding site identification. The pipeline includes receptor/target preparation, ligand preparation, VinaLC docking calculation, and molecular mechanics/generalized Born surface area (MM/GBSA) rescoring using the GB model by Onufriev and co-workers [J. Chem. Theory Comput. 2007, 3, 156-169]. Furthermore, we leverage HPC resources to perform an unprecedented, comprehensive evaluation of MM/GBSA rescoring when applied to the DUD-E data set (Directory of Useful Decoys: Enhanced), in which we selected 38 protein targets and a total of ∼0.7 million actives and decoys. The computer wall time for virtual screening has been reduced drastically on HPC machines, which increases the feasibility of extremely large ligand database screening with more accurate methods. HPC resources allowed us to rescore 20 poses per compound and evaluate the optimal number of poses to rescore. We find that keeping 5-10 poses is a good compromise between accuracy and computational expense. Overall the results demonstrate that MM/GBSA rescoring has higher average receiver operating characteristic (ROC) area under curve (AUC) values and consistently better early recovery of actives than Vina docking alone. Specifically, the enrichment performance is target-dependent. MM/GBSA rescoring significantly outperforms Vina docking for the folate enzymes, kinases, and several other enzymes. The more accurate energy function and solvation terms of the MM/GBSA method allow MM/GBSA to achieve better enrichment, but the rescoring is still limited by the docking method to generate the poses with the correct binding modes.
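
    The pose-rescoring trade-off can be illustrated with a toy Python experiment (NumPy and scikit-learn assumed; the score distributions are synthetic stand-ins, not DUD-E data): keep the top-N poses by docking score, take the best rescored energy among them, and measure enrichment by ROC AUC.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def rescored_best(dock, gbsa, n_keep):
        """Keep the n_keep poses ranked best (lowest) by docking score,
        then report the most favourable rescored energy among them."""
        kept = np.argsort(dock)[:n_keep]
        return gbsa[kept].min()

    # Hypothetical screen: 100 actives + 900 decoys, 20 poses per compound.
    # Rescored energies are the docking scores plus noise, with actives
    # shifted toward more favourable values (all numbers illustrative).
    n_cmpd, n_poses = 1000, 20
    labels = np.r_[np.ones(100), np.zeros(900)]
    shift = -2.0 * labels[:, None]
    dock = rng.normal(-6.0, 2.0, size=(n_cmpd, n_poses)) + 0.5 * shift
    gbsa = dock + rng.normal(0.0, 2.0, size=dock.shape) + shift

    for n_keep in (1, 5, 10, 20):
        scores = np.array([rescored_best(d, g, n_keep) for d, g in zip(dock, gbsa)])
        # roc_auc_score expects higher = more active, so negate the energies.
        print(f'keep {n_keep:2d} poses -> ROC AUC {roc_auc_score(labels, -scores):.3f}')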

  13. The Allusions in Parallel Prose in the Southern Dynasties and Their Deep Cultural Implications

    Institute of Scientific and Technical Information of China (English)

    刘涛

    2015-01-01

    As one of the main formal features of parallel prose in the Southern Dynasties, allusion, together with antithesis, ornate diction, tonal rules and sentence patterns, contributed to the beauty of the genre's form. Parallel prose of the Southern Dynasties not only employed a large number of allusions but also used them in very flexible ways. Before the Qi and Liang dynasties, allusions were relatively few in number and the technique comparatively crude; from the Qi and Liang dynasties onwards, the number of allusions increased greatly and the technique matured from rough to exquisite. Allusions in Southern Dynasties parallel prose could enrich the content of a composition and heighten its expressive power, and could also ornament its form and enhance its aesthetic effect, playing a crucial role in parallel prose's pursuit of formal beauty. The practice had deep cultural motivations: besides a mentality of revering the classics and antiquity, it was shaped by a literary aesthetic orientation that attached great importance to form and by a social and cultural climate that valued erudition.

  14. A comprehensive and precise quantification of the calanoid copepod Acartia tonsa (Dana) for intensive live feed cultures using an automated ZooImage system

    DEFF Research Database (Denmark)

    Vu, Minh Thi Thuy; Jepsen, Per Meyer; Hansen, Benni Winding

    2014-01-01

    ignored. In this study, we propose a novel method for highly precise classification of the development stages and biomass of A. tonsa in intensive live feed cultures, using the automated ZooImage system, a freeware image analysis package. We successfully created a training set of 13 categories, including 7 copepod and 6 non-copepod (debris) groups. ZooImage used this training set for automatic discrimination through a random forest algorithm with a general accuracy of 92.8%. ZooImage showed no significant difference from personal microscope observation in classifying solitary eggs, or mixed nauplii stages and copepodites. Furthermore, ZooImage was also adapted for automatic estimation of A. tonsa biomass. This is the first study that has successfully applied ZooImage software, enabling fast and reliable quantification of the development stages and the biomass of A. tonsa. As a result, relevant
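
    The classification step can be sketched with a small random forest in Python (scikit-learn assumed; the features, class structure and carbon factors below are synthetic placeholders, not the paper's training set):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)

    # Hypothetical training set: 13 categories (7 copepod and 6 debris groups),
    # each particle described by 8 simple image features (area, elongation, ...).
    n_per_class, n_features, n_classes = 120, 8, 13
    X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_features))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f'general accuracy: {clf.score(X_te, y_te):.1%}')

    # Biomass estimation: per-class counts times stage-specific carbon weights
    # (placeholder values in ug C per individual).
    carbon_per_ind = rng.uniform(0.05, 2.0, size=n_classes)
    counts = np.bincount(clf.predict(X_te), minlength=n_classes)
    print(f'estimated biomass: {counts @ carbon_per_ind:.1f} ug C')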

  15. Evaluation of an automated rapid diagnostic assay for detection of Gram-negative bacteria and their drug-resistance genes in positive blood cultures.

    Directory of Open Access Journals (Sweden)

    Masayoshi Tojo

    Full Text Available We evaluated the performance of the Verigene Gram-Negative Blood Culture Nucleic Acid Test (BC-GN; Nanosphere, Northbrook, IL, USA), an automated multiplex assay for rapid identification of positive blood cultures caused by 9 Gram-negative bacteria (GNB) and for detection of 9 genes associated with β-lactam resistance. The BC-GN assay can be performed directly from positive blood cultures with 5 minutes of hands-on time and 2 hours of run time per sample. A total of 397 GNB-positive blood cultures were analyzed using the BC-GN assay. Of the 397 samples, 295 were simulated samples prepared by inoculating GNB into blood culture bottles, and the remainder were clinical samples from 102 patients with positive blood cultures. Aliquots of the positive blood cultures were tested by the BC-GN assay. The results of bacterial identification between the BC-GN assay and standard laboratory methods were as follows: Acinetobacter spp. (39 isolates for the BC-GN assay/39 for the standard methods), Citrobacter spp. (7/7), Escherichia coli (87/87), Klebsiella oxytoca (13/13), Proteus spp. (11/11), Enterobacter spp. (29/30), Klebsiella pneumoniae (62/72), Pseudomonas aeruginosa (124/125), and Serratia marcescens (18/21), respectively. From the 102 clinical samples, 104 bacterial species were identified with the BC-GN assay, whereas 110 were identified with the standard methods. The BC-GN assay also detected all β-lactam resistance genes tested (233 genes, including 54 bla(CTX-M), 119 bla(IMP), 8 bla(KPC), 16 bla(NDM), 24 bla(OXA-23), 1 bla(OXA-24/40), 1 bla(OXA-48), 4 bla(OXA-58), and 6 bla(VIM)). The data show that the BC-GN assay provides rapid detection of GNB and β-lactam resistance genes in positive blood cultures and has the potential to contribute to optimal patient management through earlier detection of major antimicrobial resistance genes.

  16. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I present a real-time, software- and hardware-oriented house automation research project, designed and implemented to automate a house's electricity and to provide a security system that detects the presence of unexpected behavior.

  17. SKU splitting simulation for automated picking system based on parallel picking strategy

    Institute of Scientific and Technical Information of China (English)

    吴颖颖; 吴耀华

    2012-01-01

    To improve the efficiency of an automated picking system, an SKU (stock keeping unit) splitting model based on parallel picking was built. The optimization goal of the model is that every dispenser can finish buffering its goods before the orders merge. The model uses a delay factor to represent the difference between the order-merging time and the goods-buffering time, and it is proved that the delay factor of a channel and its delay time follow the same trend. To solve the model, a heuristic splitting algorithm based on the delay factor was designed. Simulation results show that the heuristic algorithm shortens the picking time by 8.55% to 11.7%.
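
    The role of the delay factor can be illustrated with a toy Python heuristic (the paper's model and algorithm are more detailed; the rates, loads and fixed transfer step below are illustrative):

    def delay_factors(loads, rate, merge_time):
        """Per-channel delay factor: buffering finish time minus merge time.
        Non-positive means the dispenser finishes buffering before the merge."""
        return [q / rate - merge_time for q in loads]

    def rebalance(loads, rate, merge_time, step=5):
        """Toy heuristic: move units from the most-delayed channel to the
        least-loaded one until every delay factor is non-positive
        (or until no further move can help)."""
        loads = list(loads)
        while True:
            f = delay_factors(loads, rate, merge_time)
            worst = f.index(max(f))
            best = loads.index(min(loads))
            if f[worst] <= 0 or loads[worst] - loads[best] <= step:
                return loads, f
            loads[worst] -= step
            loads[best] += step

    # Three channels with 120/80/40 units, 2 units/s, merge starts at t = 45 s
    loads, factors = rebalance([120, 80, 40], rate=2.0, merge_time=45.0)
    for ch, (q, d) in enumerate(zip(loads, factors)):
        print(f'channel {ch}: {q} units, delay factor {d:+.1f} s')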

  18. Rapid detection of significant bacteriuria by use of an automated Limulus amoebocyte lysate assay.

    OpenAIRE

    Jorgensen, J H; Alexander, G A

    1982-01-01

    Previous studies have demonstrated that significant gram-negative bacteriuria can be detected by using the Limulus amoebocyte lysate test. A series of 580 urine specimens were tested in parallel with the automated MS-2 (Abbott Laboratories) assay and with quantitative urine bacterial cultures. The overall ability of the MS-2 Limulus amoebocyte lysate test to correctly classify urine specimens as containing either greater than or equal to 10^5 organisms or less than 10^5 organisms per ml dur...

  19. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  1. Library Automation

    OpenAIRE

    Dhakne, B. N.; Giri, V. V; Waghmode, S. S.

    2010-01-01

    New technologies provide libraries with several new materials, media and modes of storing and communicating information. Library automation reduces the drudgery of repeated manual efforts in library routines, supporting collection, storage, administration, processing, preservation and communication.

  2. Process automation

    International Nuclear Information System (INIS)

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  3. Computer automation and artificial intelligence

    International Nuclear Information System (INIS)

    Rapid advances in computing resulting from the microchip revolution have increased its applications manifold, particularly for computer automation. Yet the level of automation available has limited its application to more complex and dynamic systems, which require intelligent computer control. In this paper a review of artificial intelligence techniques used to augment automation is presented. The current sequential processing approach usually adopted in artificial intelligence has succeeded in emulating the symbolic processing part of intelligence, but the processing power required to capture the more elusive aspects of intelligence points towards parallel processing. An overview of parallel processing, with emphasis on the transputer, is also provided. A fuzzy knowledge-based controller for drug delivery in muscle relaxant anesthesia, implemented on a transputer, is described. 4 figs. (author)

  4. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  5. Automation of antimicrobial activity screening.

    Science.gov (United States)

    Forry, Samuel P; Madonna, Megan C; López-Pérez, Daneli; Lin, Nancy J; Pasco, Madeleine D

    2016-03-01

    Manual and automated methods were compared for routine screening of compounds for antimicrobial activity. Automation generally accelerated assays and required less user intervention while producing comparable results. Automated protocols were validated for planktonic, biofilm, and agar cultures of the oral microbe Streptococcus mutans that is commonly associated with tooth decay. Toxicity assays for the known antimicrobial compound cetylpyridinium chloride (CPC) were validated against planktonic, biofilm forming, and 24 h biofilm culture conditions, and several commonly reported toxicity/antimicrobial activity measures were evaluated: the 50 % inhibitory concentration (IC50), the minimum inhibitory concentration (MIC), and the minimum bactericidal concentration (MBC). Using automated methods, three halide salts of cetylpyridinium (CPC, CPB, CPI) were rapidly screened with no detectable effect of the counter ion on antimicrobial activity. PMID:26970766
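
    One of the measures above, the IC50, can be estimated from a dose-response series with a few lines of Python (NumPy assumed; the CPC dilution series and viability numbers are invented, and real screens typically fit a logistic curve rather than interpolating):

    import numpy as np

    def ic50(concentrations, viability):
        """Interpolate the concentration giving 50 % viability on a log scale."""
        logc = np.log10(concentrations)
        # np.interp needs increasing x; viability falls with dose, so reverse.
        return 10 ** np.interp(50.0, viability[::-1], logc[::-1])

    conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])      # ug/mL CPC
    viab = np.array([97.0, 90.0, 72.0, 41.0, 12.0, 3.0])  # % viable S. mutans
    print(f'IC50 ~ {ic50(conc, viab):.2f} ug/mL')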

  6. Investigating the feasibility of scale up and automation of human induced pluripotent stem cells cultured in aggregates in feeder free conditions.

    Science.gov (United States)

    Soares, Filipa A C; Chandra, Amit; Thomas, Robert J; Pedersen, Roger A; Vallier, Ludovic; Williams, David J

    2014-03-10

    The transfer of a laboratory process into a manufacturing facility is one of the most critical steps required for the large scale production of cell-based therapy products. This study describes the first published protocol for scalable automated expansion of human induced pluripotent stem cell lines growing in aggregates in feeder-free and chemically defined medium. Cells were successfully transferred between different sites representative of research and manufacturing settings, and passaged manually and using the CompacT SelecT automation platform. Modified protocols were developed for the automated system, and the management of cell aggregates (clumps) was identified as the critical step. Cellular morphology, pluripotency gene expression and differentiation into the three germ layers have been used to compare the outcomes of manual and automated processes.

  7. PARALLEL STABILIZATION

    Institute of Scientific and Technical Information of China (English)

    J.L.LIONS

    1999-01-01

    A new algorithm for the stabilization of (possibly turbulent, chaotic) distributed systems, governed by linear or non-linear systems of equations, is presented. The SPA (Stabilization Parallel Algorithm) is based on a systematic parallel decomposition of the problem (related to arbitrarily overlapping decomposition of domains) and on a penalty argument. SPA is presented here for the case of linear parabolic equations with distributed or boundary control. It extends to practically all linear and non-linear evolution equations, as will be presented in several other publications.

  8. An automated HIV-1 Env-pseudotyped virus production for global HIV vaccine trials.

    Directory of Open Access Journals (Sweden)

    Anke Schultz

    Full Text Available BACKGROUND: Infections with HIV still represent a major human health problem worldwide and a vaccine is the only long-term option to fight efficiently against this virus. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To cover the increasing demands of HIV pseudoviruses, a complete cell culture and transfection automation system has been developed. METHODOLOGY/PRINCIPAL FINDINGS: The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5×3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and the product. HIV pseudovirus stocks at a scale from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automated produced pseudoviruses were of equivalent quality as those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity. CONCLUSIONS: An automated HIV pseudovirus production system has been successfully established. It allows the high quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell

  9. Automating Finance

    Science.gov (United States)

    Moore, John

    2007-01-01

    In past years, higher education's financial management side has been riddled with manual processes and aging mainframe applications. This article discusses schools which had taken advantage of an array of technologies that automate billing, payment processing, and refund processing in the case of overpayment. The investments are well worth it:…

  10. cultural

    Directory of Open Access Journals (Sweden)

    Irene Kreutz

    2006-01-01

    Full Text Available This is a qualitative study that adopted anthropology and ethnography as its theoretical-methodological framework. It presents the experiences lived by women of a community in the health-disease process, with the objective of understanding the socio-cultural and historical determinants of the prevention and treatment practices adopted by the cultural group, by means of semi-structured interviews. The themes that emerged were: the relationship between food and the health-disease process, the relations with the official health system, and the health-disease process and the supernatural. The data revealed that the residents of the investigated community have a particular way of explaining their therapeutic procedures. We consider that it is the role of health professionals, in their practice, to adopt approaches that consider the individual in his or her socio-cultural and historical dimension, given the enormous cultural diversity in our country.

  11. Parallel R

    CERN Document Server

    McCallum, Ethan

    2011-01-01

    It's tough to argue with R as a high-quality, cross-platform, open source statistical software product-unless you're in the business of crunching Big Data. This concise book introduces you to several strategies for using R to analyze large datasets. You'll learn the basics of Snow, Multicore, Parallel, and some Hadoop-related tools, including how to find them, how to use them, when they work well, and when they don't. With these packages, you can overcome R's single-threaded nature by spreading work across multiple CPUs, or offloading work to multiple machines to address R's memory barrier.

  12. Automation Security

    OpenAIRE

    Mirzoev, Dr. Timur

    2014-01-01

    Web-based automated process control systems are a new type of application that uses the Internet to control industrial processes with access to real-time data. Supervisory control and data acquisition (SCADA) networks contain computers and applications that perform key functions in providing essential services and commodities (e.g., electricity, natural gas, gasoline, water, waste treatment, transportation) to all Americans. As such, they are part of the nation's critical infrastructu...

  13. Towards Distributed Memory Parallel Program Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis, which cannot be addressed by a file-by-file view of large scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  14. Computer-Aided Parallelizer and Optimizer

    Science.gov (United States)

    Jin, Haoqiang

    2011-01-01

    The Computer-Aided Parallelizer and Optimizer (CAPO) automates the insertion of compiler directives (see figure) to facilitate parallel processing on Shared Memory Parallel (SMP) machines. While CAPO currently is integrated seamlessly into CAPTools (developed at the University of Greenwich, now marketed as ParaWise), CAPO was independently developed at Ames Research Center as one of the components for the Legacy Code Modernization (LCM) project. The current version takes serial FORTRAN programs, performs interprocedural data dependence analysis, and generates OpenMP directives. Due to the widely supported OpenMP standard, the generated OpenMP codes have the potential to run on a wide range of SMP machines. CAPO relies on accurate interprocedural data dependence information currently provided by CAPTools. Compiler directives are generated through identification of parallel loops in the outermost level, construction of parallel regions around parallel loops and optimization of parallel regions, and insertion of directives with automatic identification of private, reduction, induction, and shared variables. Attempts also have been made to identify potential pipeline parallelism (implemented with point-to-point synchronization). Although directives are generated automatically, user interaction with the tool is still important for producing good parallel codes. A comprehensive graphical user interface is included for users to interact with the parallelization process.

  15. Automated Budget System

    Data.gov (United States)

    Department of Transportation — The Automated Budget System (ABS) automates management and planning of the Mike Monroney Aeronautical Center (MMAC) budget by providing enhanced capability to plan,...

  16. Study on Parallel Computing

    Institute of Scientific and Technical Information of China (English)

    Guo-Liang Chen; Guang-Zhong Sun; Yun-Quan Zhang; Ze-Yao Mo

    2006-01-01

    In this paper, we present a general survey on parallel computing. The main contents include the parallel computer system, which is the hardware platform of parallel computing; the parallel algorithm, which is its theoretical base; and parallel programming, which is its software support. After that, we also introduce some parallel applications and enabling technologies. We argue that parallel computing research should form an integrated methodology of "architecture - algorithm - programming - application". Only in this way can parallel computing research achieve continuous development and become more realistic.

  17. A Performance Analysis Tool for PVM Parallel Programs

    Institute of Scientific and Technical Information of China (English)

    Chen Wang; Yin Liu; Changjun Jiang; Zhaoqing Zhang

    2004-01-01

    In this paper, we introduce the design and implementation of ParaVT, a visual performance analysis and parallel debugging tool. In ParaVT, we propose an automated instrumentation mechanism. Based on this mechanism, ParaVT automatically analyzes the performance bottlenecks of parallel applications and provides a visual user interface to monitor and analyze the performance of parallel programs. In addition, it supports certain extensions.

  18. Comparison of automated BAX polymerase chain reaction and standard culture methods for detection of Listeria monocytogenes in blue crab meat (Callinectes sapidus) and blue crab processing plants

    Science.gov (United States)

    This study compared the BAX polymerase chain reaction method (BAX PCR) with the standard culture method (SCM) for detection of L. monocytogenes in blue crab meat and crab processing plants, with the aim of addressing this data gap. Raw crabs, finished products and environmental sponge samp...

  19. Acquisition of data from on-line laser turbidimeter and calculation of some kinetic variables in computer-coupled automated fed-batch culture

    International Nuclear Information System (INIS)

    Output signals of a commercially available on-line laser turbidimeter exhibit fluctuations due to air and/or CO2 bubbles. A simple data processing algorithm and personal computer software have been developed to smooth the noisy turbidity data acquired, and to utilize them for the on-line calculation of some kinetic variables involved in batch and fed-batch cultures of uniformly dispersed microorganisms. With this software, about 10^3 instantaneous turbidity data acquired over 55 s are averaged and converted to dry cell concentration, X, every minute. Also, the volume of the culture broth, V, is estimated from the averaged output data of the weight loss of the feed solution reservoir, W, using an electronic balance on which the reservoir is placed. Then, the computer software is used to perform linear regression analyses over the past 30 min of the total biomass, VX, the natural logarithm of the total biomass, ln(VX), and the weight loss, W, in order to calculate the volumetric growth rate, d(VX)/dt, the specific growth rate, μ [= d ln(VX)/dt], and the rate of weight loss, dW/dt, every minute in a fed-batch culture. The software used to perform the first-order regression analyses of VX, ln(VX) and W was applied to batch or fed-batch cultures of Escherichia coli on minimum synthetic or natural complex media. Sample determination coefficients of the three different variables (VX, ln(VX) and W) were close to unity, indicating that the calculations are accurate. Furthermore, the growth yield, Yx/s, and the specific substrate consumption rate, qsc, were approximately estimated from the data on dW/dt and μ in a 'balanced' fed-batch culture of E. coli on the minimum synthetic medium, where the computer-aided substrate-feeding system automatically matches well with the cell growth. (author)
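
    The sliding-window regression described above reduces to a few lines of Python (NumPy assumed; the growth rate, noise level and sampling below are synthetic):

    import numpy as np

    def specific_growth_rate(t_min, vx, window=30.0):
        """First-order regression of ln(VX) over the past `window` minutes;
        the slope is the specific growth rate mu = d ln(VX)/dt."""
        mu = np.full(len(t_min), np.nan)
        for i in range(len(t_min)):
            sel = (t_min <= t_min[i]) & (t_min > t_min[i] - window)
            if sel.sum() >= 2:
                slope, _ = np.polyfit(t_min[sel], np.log(vx[sel]), 1)
                mu[i] = slope * 60.0          # per minute -> per hour
        return mu

    # Synthetic fed-batch: growth at mu = 0.5 1/h plus multiplicative noise
    t = np.arange(0, 240)                               # minutes
    vx = np.exp(0.5 / 60.0 * t)                         # total biomass VX
    vx *= np.exp(np.random.default_rng(3).normal(0, 0.01, t.size))
    mu = specific_growth_rate(t, vx)
    print(f'mu estimate at t = 120 min: {mu[120]:.3f} 1/h')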

  20. Automated extraction improves multiplex molecular detection of infection in septic patients.

    Directory of Open Access Journals (Sweden)

    Benito J Regueiro

    Full Text Available Sepsis is one of the leading causes of morbidity and mortality in hospitalized patients worldwide. Molecular technologies for rapid detection of microorganisms in patients with sepsis have only recently become available. The LightCycler SeptiFast Test MGRADE (Roche Diagnostics GmbH) is a multiplex PCR analysis able to detect DNA of the 25 most frequent pathogens in bloodstream infections. The time and labor saved while avoiding excessive laboratory manipulation is the rationale for selecting the automated MagNA Pure Compact Nucleic Acid Isolation Kit I (Roche Applied Science GmbH) as an alternative to conventional SeptiFast extraction. For the purposes of this study, we evaluated extraction in order to demonstrate the feasibility of automation. Finally, a prospective observational study was done using 106 clinical samples obtained from 76 patients in our ICU. Both extraction methods were used in parallel to test the samples. When molecular detection test results using both manual and automated extraction were compared with the data from blood cultures obtained at the same time, the results show that SeptiFast with the alternative MagNA Pure Compact extraction not only shortens the complete workflow to 3.57 hrs., but also increases the sensitivity of the molecular assay for detecting infection as defined by positive blood culture confirmation.

  1. Automation tools for flexible aircraft maintenance.

    Energy Technology Data Exchange (ETDEWEB)

    Prentice, William J.; Drotning, William D.; Watterberg, Peter A.; Loucks, Clifford S.; Kozlowski, David M.

    2003-11-01

    This report summarizes the accomplishments of the Laboratory Directed Research and Development (LDRD) project 26546 at Sandia, during the period FY01 through FY03. The project team visited four DoD depots that support extensive aircraft maintenance in order to understand critical needs for automation, and to identify maintenance processes for potential automation or integration opportunities. From the visits, the team identified technology needs and application issues, as well as non-technical drivers that influence the application of automation in depot maintenance of aircraft. Software tools for automation facility design analysis were developed, improved, extended, and integrated to encompass greater breadth for eventual application as a generalized design tool. The design tools for automated path planning and path generation have been enhanced to incorporate those complex robot systems with redundant joint configurations, which are likely candidate designs for a complex aircraft maintenance facility. A prototype force-controlled actively compliant end-effector was designed and developed based on a parallel kinematic mechanism design. This device was developed for demonstration of surface finishing, one of many in-contact operations performed during aircraft maintenance. This end-effector tool was positioned along the workpiece by a robot manipulator, programmed for operation by the automated planning tools integrated for this project. Together, the hardware and software tools demonstrate many of the technologies required for flexible automation in a maintenance facility.

  2. Special parallel processing workshop

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts of parallel processing.

  3. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations are few and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  4. Manufacturing and automation

    Directory of Open Access Journals (Sweden)

    Ernesto Córdoba Nieto

    2010-04-01

    Full Text Available The article presents concepts and definitions from different sources concerning automation. The work approaches automation by virtue of the author's experience in manufacturing production; why and how automation projects are embarked upon is considered. Technological reflection regarding the progressive advances or stages of automation in the production area is stressed. Coriat and Freyssenet's thoughts about and approaches to the problem of automation and its current state are taken and examined, especially those referring to the problem's relationship with reconciling the level of automation with the flexibility and productivity demanded by competitive, worldwide manufacturing.

  5. Parallel Programming with Intel Parallel Studio XE

    CERN Document Server

    Blair-Chappell , Stephen

    2012-01-01

    Optimize code for multi-core processors with Intel's Parallel Studio Parallel programming is rapidly becoming a "must-know" skill for developers. Yet, where to start? This teach-yourself tutorial is an ideal starting point for developers who already know Windows C and C++ and are eager to add parallelism to their code. With a focus on applying tools, techniques, and language extensions to implement parallelism, this essential resource teaches you how to write programs for multicore and leverage the power of multicore in your programs. Sharing hands-on case studies and real-world examples, the

  6. Parallelism in Constraint Programming

    OpenAIRE

    Rolf, Carl Christian

    2011-01-01

    Writing efficient parallel programs is the biggest challenge of the software industry for the foreseeable future. We are currently in a time when parallel computers are the norm, not the exception. Soon, parallel processors will be standard even in cell phones. Without drastic changes in hardware development, all software must be parallelized to its fullest extent. Parallelism can increase performance and reduce power consumption at the same time. Many programs will execute faster on a...

  7. Configuration Management Automation (CMA)

    Data.gov (United States)

    Department of Transportation — Configuration Management Automation (CMA) will provide an automated, integrated enterprise solution to support CM of FAA NAS and Non-NAS assets and investments. CMA...

  8. Workflow automation architecture standard

    Energy Technology Data Exchange (ETDEWEB)

    Moshofsky, R.P.; Rohen, W.T. [Boeing Computer Services Co., Richland, WA (United States)

    1994-11-14

    This document presents an architectural standard for application of workflow automation technology. The standard includes a functional architecture, process for developing an automated workflow system for a work group, functional and collateral specifications for workflow automation, and results of a proof of concept prototype.

  9. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  10. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  11. Computing Parallelism in Discourse

    CERN Document Server

    Gardent, C; Gardent, Claire; Kohlhase, Michael

    1997-01-01

    Although much has been said about parallelism in discourse, a formal, computational theory of parallelism structure is still outstanding. In this paper, we present a theory which given two parallel utterances predicts which are the parallel elements. The theory consists of a sorted, higher-order abductive calculus and we show that it reconciles the insights of discourse theories of parallelism with those of Higher-Order Unification approaches to discourse semantics, thereby providing a natural framework in which to capture the effect of parallelism on discourse semantics.

  12. Shoe-String Automation

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, M.L.

    2001-07-30

    Faced with a downsizing organization, serious budget reductions and retirement of key metrology personnel, maintaining capabilities to provide necessary services to our customers was becoming increasingly difficult. It appeared that the only solution was to automate some of our more personnel-intensive processes; however, it was crucial that the most personnel-intensive candidate process be automated, at the lowest price possible and with the lowest risk of failure. This discussion relates factors in the selection of the Standard Leak Calibration System for automation, the methods of automation used to provide the lowest-cost solution and the benefits realized as a result of the automation.

  13. Software Test Automation in Practice: Empirical Observations

    Directory of Open Access Journals (Sweden)

    Jussi Kasurinen

    2010-01-01

    Full Text Available The objective of this industry study is to shed light on the current situation and improvement needs in software test automation. To this end, 55 industry specialists from 31 organizational units were interviewed. In parallel with the survey, a qualitative study was conducted in 12 selected software development organizations. The results indicated that the software testing processes usually follow systematic methods to a large degree, and have few immediate or critical resource requirements. Based on the results, the testing processes have approximately three fourths of the resources they need, and have access to a limited, but usually sufficient, group of testing tools. As for test automation, the situation is less straightforward: based on our study, the applicability of test automation is still limited, and its adoption involves practical usability difficulties. In this study, we analyze and discuss these limitations and difficulties.

  14. Automated Parallel Computing Tools for Multicore Machines and Clusters Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to improve productivity of high performance computing for applications on multicore computers and clusters. These machines built from one or more chips...

  15. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully appliedto large-scale scientific computations. This book demonstrates how avariety of applications in physics, biology, mathematics and other scienceswere implemented on real parallel computers to produce new scientificresults. It investigates issues of fine-grained parallelism relevant forfuture supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configuredifferent massively parallel machines, design and implement basic systemsoftware, and develop

  16. Parallel adaptive wavelet collocation method for PDEs

    Energy Technology Data Exchange (ETDEWEB)

    Nejadmalayeri, Alireza, E-mail: Alireza.Nejadmalayeri@gmail.com [FortiVenti Inc., Suite 404, 999 Canada Place, Vancouver, BC, V6C 3E2 (Canada); Vezolainen, Alexei, E-mail: Alexei.Vezolainen@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States); Brown-Dymkoski, Eric, E-mail: Eric.Browndymkoski@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States); Vasilyev, Oleg V., E-mail: Oleg.Vasilyev@Colorado.edu [Department of Mechanical Engineering, University of Colorado Boulder, UCB 427, Boulder, CO 80309 (United States)

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.
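
    As a rough illustration of the load-balancing step described above (a sketch under assumed tree sizes and process count, not the authors' code), trees can be reassigned largest-first to the least-loaded process so that every process ends up with approximately the same number of grid points:

    ```cpp
    // Largest-first (LPT) greedy reassignment of trees to processes.
    #include <algorithm>
    #include <functional>
    #include <iostream>
    #include <queue>
    #include <utility>
    #include <vector>

    int main() {
        std::vector<int> tree_points{900, 400, 350, 300, 250, 200, 100};  // hypothetical grid points per tree
        const int procs = 3;

        std::sort(tree_points.rbegin(), tree_points.rend());  // largest trees first
        // Min-heap of (current load, process id): always extend the least-loaded process.
        std::priority_queue<std::pair<int, int>, std::vector<std::pair<int, int>>,
                            std::greater<>> load;
        for (int p = 0; p < procs; ++p) load.push({0, p});

        std::vector<int> total(procs, 0);
        for (int t : tree_points) {
            auto [l, p] = load.top();
            load.pop();
            total[p] = l + t;              // migrate this tree to process p
            load.push({l + t, p});
        }
        for (int p = 0; p < procs; ++p)    // loads come out roughly equal: 900, 850, 750
            std::cout << "process " << p << ": " << total[p] << " grid points\n";
    }
    ```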

  17. Developing Parallel Programs

    Directory of Open Access Journals (Sweden)

    Ranjan Sen

    2012-09-01

    Full Text Available Parallel programming is an extension of sequential programming; today, it is becoming the mainstream paradigm in day-to-day information processing. Its aim is to build the fastest programs on parallel computers. The methodologies for developing a parallel program can be put into integrated frameworks. Development focuses on algorithms, languages, and how the program is deployed on the parallel computer.

  18. Invariants for Parallel Mapping

    Institute of Scientific and Technical Information of China (English)

    YIN Yajun; WU Jiye; FAN Qinshan; HUANG Kezhi

    2009-01-01

    This paper analyzes the geometric quantities that remain unchanged during parallel mapping (i.e., mapping from a reference curved surface to a parallel surface with identical normal direction). The second gradient operator, the second class of integral theorems, the Gauss-curvature-based integral theorems, and the core property of parallel mapping are used to derive a series of parallel mapping invariants or geometrically conserved quantities. These include not only local mapping invariants but also global mapping invariants found to exist both in a curved surface and along curves on the curved surface. The parallel mapping invariants are used to identify important transformations between the reference surface and parallel surfaces. These mapping invariants and transformations have potential applications in geometry, physics, biomechanics, and mechanics in which various dynamic processes occur along or between parallel surfaces.
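
    For reference, the parallel mapping the abstract refers to can be written in its standard form (a sketch of the usual textbook definition, not notation taken from the paper):

    ```latex
    % Reference surface r(u, v) with unit normal n(u, v); the parallel
    % surface at a fixed offset distance d along the normal is
    \[
      \mathbf{r}_d(u,v) \;=\; \mathbf{r}(u,v) + d\,\mathbf{n}(u,v),
    \]
    % so corresponding points of the two surfaces share the same normal
    % direction, which is the mapping whose invariants the paper studies.
    ```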

  19. A parallel buffer tree

    DEFF Research Database (Denmark)

    Sitchinava, Nodar; Zeh, Norbert

    2012-01-01

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries in the optimal O(sort_P(N) + K/(PB)) parallel I/O complexity, where K is the size of the output reported in the process and sort_P(N) is the parallel I/O complexity of sorting N elements using P processors.

  20. Automation in the clinical microbiology laboratory.

    Science.gov (United States)

    Novak, Susan M; Marlowe, Elizabeth M

    2013-09-01

    Imagine a clinical microbiology laboratory where a patient's specimens are placed on a conveyor belt and sent on an automation line for processing and plating. Technologists need only log onto a computer to visualize the images of a culture and send it to a mass spectrometer for identification. Once a pathogen is identified, the system knows to send the colony for susceptibility testing. This is the future of the clinical microbiology laboratory. This article outlines the operational and staffing challenges facing clinical microbiology laboratories and the evolution of automation that is shaping the way laboratory medicine will be practiced in the future.

  1. A parallel algorithm for random searches

    Science.gov (United States)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typical sequential process. To assure the same features as the sequential random search in the parallel version, we analyze the spatial patterns of the encountered targets in the sequential case for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
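
    A minimal sketch of the parallelization idea (with assumed, not fitted, lognormal parameters; this is not the authors' code): independent walkers draw inter-target distances from the lognormal distribution measured in the sequential search, and the detection efficiency is recovered as targets found per unit distance travelled:

    ```cpp
    // Parallel walkers reproducing the sequential search's inter-target statistics.
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <thread>
    #include <vector>

    int main() {
        const double mu = 1.0, sigma = 0.5;          // assumed fitted lognormal parameters
        const int walkers = 8, targets_per_walker = 100000;
        std::vector<double> travelled(walkers, 0.0); // total distance per walker

        std::vector<std::thread> pool;
        for (int w = 0; w < walkers; ++w)
            pool.emplace_back([&travelled, mu, sigma, w] {
                std::mt19937 rng(42 + w);            // independent stream per walker
                std::lognormal_distribution<double> step(mu, sigma);
                for (int t = 0; t < targets_per_walker; ++t)
                    travelled[w] += step(rng);       // distance to the next detected target
            });
        for (auto& th : pool) th.join();

        double total = std::accumulate(travelled.begin(), travelled.end(), 0.0);
        // Efficiency of the search: targets detected per unit distance travelled.
        std::cout << "efficiency: "
                  << static_cast<double>(walkers) * targets_per_walker / total << '\n';
    }
    ```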

  2. Integrating Parallelizing Compilation Technologies for SMP Clusters

    Institute of Scientific and Technical Information of China (English)

    Xiao-Bing Feng; Li Chen; Yi-Ran Wang; Xiao-Mi An; Lin Ma; Chun-Lei Sang; Zhao-Qing Zhang

    2005-01-01

    In this paper, a source-to-source parallelizing compiler system, AutoPar, is presented. The system transforms FORTRAN programs into multi-level hybrid MPI/OpenMP parallel programs. Integrated parallel optimizing technologies are utilized extensively to derive an effective program decomposition in the whole program scope. Other features, such as synchronization optimization and communication optimization, improve the performance scalability of the generated parallel programs, both intra-node and inter-node. The system makes a great effort to boost automation of parallelization. Profiling feedback is used in performance estimation, which is the basis of automatic program decomposition. Performance results for eight benchmarks in NPB1.0 from NAS on an SMP cluster are given, and the speedup is desirable. It is noticeable that in the experiment, at most one data distribution directive and a reduction directive are inserted by the user in BT/SP/LU. The compiler is based on ORC, the Open Research Compiler. ORC is a powerful compiler infrastructure, with such features as robustness, flexibility and efficiency. The strong analysis capability and well-defined infrastructure of ORC make the system implementation quite fast.

  3. Parallelization in Modern C++

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The traditionally used and well established parallel programming models OpenMP and MPI are both targeting lower level parallelism and are meant to be as language agnostic as possible. For a long time, those models were the only widely available portable options for developing parallel C++ applications beyond using plain threads. This has strongly limited the optimization capabilities of compilers, has inhibited extensibility and genericity, and has restricted the use of those models together with other, modern higher level abstractions introduced by the C++11 and C++14 standards. The recent revival of interest in the industry and wider community for the C++ language has also spurred a remarkable amount of standardization proposals and technical specifications being developed. Those efforts however have so far failed to build a vision on how to seamlessly integrate various types of parallelism, such as iterative parallel execution, task-based parallelism, asynchronous many-task execution flows, continuations...
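
    As a minimal sketch of the task-based style the talk advocates (the example itself is ours, using only standard C++11 facilities):

    ```cpp
    #include <future>
    #include <iostream>

    int main() {
        // A task launched asynchronously...
        auto a = std::async(std::launch::async, [] { return 21; });
        // ...and a "continuation": a second task consuming the first one's result.
        auto b = std::async(std::launch::async, [&a] { return a.get() * 2; });
        std::cout << b.get() << '\n';  // prints: 42
    }
    ```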

  4. Parallel digital forensics infrastructure.

    Energy Technology Data Exchange (ETDEWEB)

    Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics (PDF) infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  5. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix Functions and Characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  6. CS-Studio Scan System Parallelization

    Energy Technology Data Exchange (ETDEWEB)

    Kasemir, Kay [ORNL; Pearson, Matthew R [ORNL

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  7. Practical Parallel Rendering

    CERN Document Server

    Chalmers, Alan

    2002-01-01

    Meeting the growing demands for speed and quality in rendering computer graphics images requires new techniques. Practical parallel rendering provides one of the most practical solutions. This book addresses the basic issues of rendering within a parallel or distributed computing environment, and considers the strengths and weaknesses of multiprocessor machines and networked render farms for graphics rendering. Case studies of working applications demonstrate, in detail, practical ways of dealing with complex issues involved in parallel processing.

  8. Programming Parallel Computers

    OpenAIRE

    Chandy, K. Mani

    1988-01-01

    This paper is from a keynote address to the IEEE International Conference on Computer Languages, October 9, 1988. Keynote addresses are expected to be provocative (and perhaps even entertaining), but not necessarily scholarly. The reader should be warned that this talk was prepared with these expectations in mind.Parallel computers offer the potential of great speed at low cost. The promise of parallelism is limited by the ability to program parallel machines effectively. This paper explores ...

  9. Explicit parallel programming

    OpenAIRE

    Gamble, James Graham

    1990-01-01

    While many parallel programming languages exist, they rarely address the issue of communication (implying expressibility and readability). A new language, called Explicit Parallel Programming (EPP), attempts to provide this quality by separating the responsibility for the execution of run time actions from the responsibility for deciding the order in which they occur. The ordering of a parallel algorithm is specified in the new EPP language; run ti...

  10. Approach of generating parallel programs from parallelized algorithm design strategies

    Institute of Scientific and Technical Information of China (English)

    WAN Jian-yi; LI Xiao-ying

    2008-01-01

    Today, parallel programming is dominated by message passing libraries, such as the message passing interface (MPI). This article intends to simplify parallel programming by generating parallel programs from parallelized algorithm design strategies. It uses skeletons to abstract parallelized algorithm design strategies, as well as parallel architectures. Starting from a problem specification, an abstract parallel programming language (Apla+) program is generated from parallelized algorithm design strategies and problem-specific function definitions. By combining with parallel architectures, the parallelism implicit in the parallelized algorithm design strategies is exploited. With implementation and transformation, a C++ with parallel virtual machine (CPPVM) parallel program is finally generated. The parallelized branch and bound (B&B) algorithm design strategy and the parallelized divide and conquer (D&C) algorithm design strategy are studied in this article as examples, and the approach is illustrated with a case study.
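
    A minimal sketch of what such a parallelized divide-and-conquer skeleton can look like (rendered here with standard C++ std::async rather than Apla+/CPPVM; the problem-specific functions are hypothetical):

    ```cpp
    #include <future>
    #include <iostream>
    #include <utility>
    #include <vector>

    // Generic skeleton: the parallel D&C strategy is fixed once; the
    // problem-specific parts (trivial/solve/divide/combine) are plugged in.
    template <typename P, typename R, typename Triv, typename Solve,
              typename Divide, typename Combine>
    R dac(P p, Triv trivial, Solve solve, Divide divide, Combine combine) {
        if (trivial(p)) return solve(p);
        std::vector<std::future<R>> futs;
        for (const P& sub : divide(p))                 // subproblems run concurrently
            futs.push_back(std::async(std::launch::async, [=] {
                return dac<P, R>(sub, trivial, solve, divide, combine);
            }));
        std::vector<R> partial;
        for (auto& f : futs) partial.push_back(f.get());
        return combine(partial);
    }

    int main() {
        using Range = std::pair<long, long>;           // instantiation: sum of 1..100
        long total = dac<Range, long>(
            {1, 100},
            [](Range r) { return r.second - r.first < 10; },  // small enough to solve directly?
            [](Range r) { long s = 0; for (long i = r.first; i <= r.second; ++i) s += i; return s; },
            [](Range r) {                                     // split the interval in half
                long m = (r.first + r.second) / 2;
                return std::vector<Range>{{r.first, m}, {m + 1, r.second}};
            },
            [](const std::vector<long>& v) { return v[0] + v[1]; });  // merge partial sums
        std::cout << total << '\n';                    // prints 5050
    }
    ```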

  11. Proceedings of the 1988 IEEE international conference on robotics and automation. Volume 1

    International Nuclear Information System (INIS)

    These proceedings compile the papers presented at the international conference (1988) sponsored by IEEE Council on ''Robotics and Automation''. The subjects discussed were: automation and robots of nuclear power stations; algorithms of multiprocessors; parallel processing and computer architecture; and U.S. DOE research programs on nuclear power plants

  12. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
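
    Two of the named patterns, reduction and prefix scan, can be written directly with the C++17 parallel algorithms (a sketch of ours; the presentation itself ships no code):

    ```cpp
    // Reduction and inclusive prefix scan with C++17 execution policies.
    #include <execution>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> x(1 << 20, 1.5);

        // Reduction: combine all elements with an associative operation (+ here).
        double sum = std::reduce(std::execution::par, x.begin(), x.end(), 0.0);

        // Prefix scan: scan[i] = x[0] + ... + x[i]; parallelizable despite the dependence.
        std::vector<double> scan(x.size());
        std::inclusive_scan(std::execution::par, x.begin(), x.end(), scan.begin());

        std::cout << sum << ' ' << scan.back() << '\n';  // both print 1.5 * 2^20 = 1572864
    }
    ```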

  13. Parallel Online Learning

    CERN Document Server

    Hsu, Daniel; Langford, John; Smola, Alex

    2011-01-01

    In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sharding approach that present various tradeoffs between delay, degree of parallelism, representation power and empirical performance.

  14. Automate functional testing

    Directory of Open Access Journals (Sweden)

    Ramesh Kalindri

    2014-06-01

    Full Text Available Currently, software engineers are increasingly turning to the option of automating functional tests, but they have not always been successful in this endeavor. Reasons range from poor planning to cost overruns in the process. Some principles that can guide teams in automating these tests are described in this article.

  15. Automation in Warehouse Development

    NARCIS (Netherlands)

    Hamberg, R.; Verriet, J.

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes.

  16. Work and Programmable Automation.

    Science.gov (United States)

    DeVore, Paul W.

    A new industrial era based on electronics and the microprocessor has arrived, an era that is being called intelligent automation. Intelligent automation, in the form of robots, replaces workers, and the new products, using microelectronic devices, require significantly less labor to produce than the goods they replace. The microprocessor thus…

  17. Library Automation Style Guide.

    Science.gov (United States)

    Gaylord Bros., Liverpool, NY.

    This library automation style guide lists specific terms and names often used in the library automation industry. The terms and/or acronyms are listed alphabetically and each is followed by a brief definition. The guide refers to the "Chicago Manual of Style" for general rules, and a notes section is included for the convenience of individual…

  19. Embodied and Distributed Parallel DJing.

    Science.gov (United States)

    Cappelen, Birgitta; Andersson, Anders-Petter

    2016-01-01

    Everyone has a right to take part in cultural events and activities, such as music performances and music making. Enforcing that right, within Universal Design, is often limited to a focus on physical access to public areas, hearing aids etc., or groups of persons with special needs performing in traditional ways. The latter might be people with disabilities, being musicians playing traditional instruments, or actors playing theatre. In this paper we focus on the innovative potential of including people with special needs, when creating new cultural activities. In our project RHYME our goal was to create health promoting activities for children with severe disabilities, by developing new musical and multimedia technologies. Because of the users' extreme demands and rich contribution, we ended up creating both a new genre of musical instruments and a new art form. We call this new art form Embodied and Distributed Parallel DJing, and the new genre of instruments for Empowering Multi-Sensorial Things.

  20. Automation in immunohematology.

    Science.gov (United States)

    Bajpai, Meenu; Kaur, Ravneet; Gupta, Ekta

    2012-07-01

    There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully-automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results is another major advantage of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever-increasing workload. This article discusses the various issues involved in the process.

  1. Automation in Immunohematology

    Directory of Open Access Journals (Sweden)

    Meenu Bajpai

    2012-01-01

    Full Text Available There have been rapid technological advances in blood banking in the South Asian region over the past decade, with an increasing emphasis on the quality and safety of blood products. The conventional test tube technique has given way to newer techniques such as the column agglutination technique, solid phase red cell adherence assay, and erythrocyte-magnetized technique. These new technologies are adaptable to automation, and major manufacturers in this field have come up with semi- and fully-automated equipment for immunohematology tests in the blood bank. Automation improves the objectivity and reproducibility of tests. It reduces human errors in patient identification and transcription errors. Documentation and traceability of tests, reagents and processes, and archiving of results is another major advantage of automation. Shifting from manual methods to automation is a major undertaking for any transfusion service to provide quality patient care with a shorter turnaround time for their ever-increasing workload. This article discusses the various issues involved in the process.

  3. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  4. Advances in inspection automation

    Science.gov (United States)

    Weber, Walter H.; Mair, H. Douglas; Jansen, Dion; Lombardi, Luciano

    2013-01-01

    This new session at QNDE reflects the growing interest in inspection automation. Our paper describes a newly developed platform that makes complex NDE automation possible without the need for software programmers. Inspection tasks that are tedious, error-prone or impossible for humans to perform can now be automated using a form of drag-and-drop visual scripting. Our work attempts to rectify the problem that NDE is not keeping pace with the rest of factory automation. Outside of NDE, robots routinely and autonomously machine parts, assemble components, weld structures and report progress to corporate databases. By contrast, components arriving in the NDT department typically require manual part handling, calibrations and analysis. The automation examples in this paper cover the development of robotic thickness gauging and the use of adaptive contour following on the NRU reactor inspection at Chalk River.

  5. Automated model building

    CERN Document Server

    Caferra, Ricardo; Peltier, Nicholas

    2004-01-01

    This is the first book on automated model building, a discipline of automated deduction that is of growing importance. Although models and their construction are important per se, automated model building has appeared as a natural enrichment of automated deduction, especially in the attempt to capture the human way of reasoning. The book provides an historical overview of the field of automated deduction, and presents the foundations of different existing approaches to model construction, in particular those developed by the authors. Finite and infinite model building techniques are presented. The main emphasis is on calculi-based methods, and relevant practical results are provided. The book is of interest to researchers and graduate students in computer science, computational logic and artificial intelligence. It can also be used as a textbook in advanced undergraduate courses.

  6. Automated manufacturing of chimeric antigen receptor T cells for adoptive immunotherapy using CliniMACS prodigy.

    Science.gov (United States)

    Mock, Ulrike; Nickolay, Lauren; Philip, Brian; Cheung, Gordon Weng-Kit; Zhan, Hong; Johnston, Ian C D; Kaiser, Andrew D; Peggs, Karl; Pule, Martin; Thrasher, Adrian J; Qasim, Waseem

    2016-08-01

    Novel cell therapies derived from human T lymphocytes are exhibiting enormous potential in early-phase clinical trials in patients with hematologic malignancies. Ex vivo modification of T cells is currently limited to a small number of centers with the required infrastructure and expertise. The process requires isolation, activation, transduction, expansion and cryopreservation steps. To simplify procedures and widen applicability for clinical therapies, automation of these procedures is being developed. The CliniMACS Prodigy (Miltenyi Biotec) has recently been adapted for lentiviral transduction of T cells and here we analyse the feasibility of a clinically compliant T-cell engineering process for the manufacture of T cells encoding chimeric antigen receptors (CAR) for CD19 (CAR19), a widely targeted antigen in B-cell malignancies. Using a closed, single-use tubing set we processed mononuclear cells from fresh or frozen leukapheresis harvests collected from healthy volunteer donors. Cells were phenotyped and subjected to automated processing and activation using TransAct, a polymeric nanomatrix activation reagent incorporating CD3/CD28-specific antibodies. Cells were then transduced and expanded in the CentriCult-Unit of the tubing set, under stabilized culture conditions with automated feeding and media exchange. The process was continuously monitored to determine kinetics of expansion, transduction efficiency and phenotype of the engineered cells in comparison with small-scale transductions run in parallel. We found that transduction efficiencies, phenotype and function of CAR19 T cells were comparable with existing procedures and overall T-cell yields sufficient for anticipated therapeutic dosing. The automation of closed-system T-cell engineering should improve dissemination of emerging immunotherapies and greatly widen applicability.

  7. Automation of Hubble Space Telescope Mission Operations

    Science.gov (United States)

    Burley, Richard; Goulet, Gregory; Slater, Mark; Huey, William; Bassford, Lynn; Dunham, Larry

    2012-01-01

    On June 13, 2011, after more than 21 years, 115 thousand orbits, and nearly 1 million exposures taken, the operation of the Hubble Space Telescope successfully transitioned from 24x7x365 staffing to 8x5 staffing. This required the automation of routine mission operations including telemetry and forward link acquisition, data dumping and solid-state recorder management, stored command loading, and health and safety monitoring of both the observatory and the HST Ground System. These changes were driven by budget reductions, and required ground system and onboard spacecraft enhancements across the entire operations spectrum, from planning and scheduling systems to payload flight software. Changes in personnel and staffing were required in order to adapt to the new roles and responsibilities of the new automated operations era. This paper will provide a high-level overview of the obstacles to automating nominal HST mission operations, both technical and cultural, and how those obstacles were overcome.

  8. Singularities of parallel surfaces

    CERN Document Server

    Fukui, Toshizumi

    2012-01-01

    We investigate singularities of all parallel surfaces to a given regular surface. In the generic context, the types of singularities of parallel surfaces are cuspidal edge, swallowtail, cuspidal lips, cuspidal beaks, cuspidal butterfly and 3-dimensional $D_4^\pm$ singularities. We give criteria for these singularity types in terms of differential geometry (Theorems 3.4 and 3.5).

  9. Patterns For Parallel Programming

    CERN Document Server

    Mattson, Timothy G; Massingill, Berna L

    2005-01-01

    From grids and clusters to next-generation game consoles, parallel computing is going mainstream. Innovations such as Hyper-Threading Technology, HyperTransport Technology, and multicore microprocessors from IBM, Intel, and Sun are accelerating the movement's growth. Only one thing is missing: programmers with the skills to meet the soaring demand for parallel software.

  10. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  11. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  12. Automatic Performance Debugging of SPMD Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Jianfeng; Tu, Bibo; Meng, Dan

    2010-01-01

    Automatic performance debugging of parallel applications usually involves two steps: automatic detection of performance bottlenecks and uncovering their root causes for performance optimization. Previous work fails to resolve this challenging issue in several ways: first, several previous efforts automate analysis processes, but present the results in a confined way that only identifies performance problems with a priori knowledge; second, several tools take exploratory or confirmatory data analysis to automatically discover relevant performance data relationships. However, these efforts do not focus on locating performance bottlenecks or uncovering their root causes. In this paper, we design and implement an innovative system, AutoAnalyzer, to automatically debug the performance problems of single program, multiple data (SPMD) parallel programs. Our system is unique in terms of two dimensions: first, without any a priori knowledge, we automatically locate bottlenecks and uncover their root causes for performance optimization.

  13. Detection Of Control Flow Errors In Parallel Programs At Compile Time

    Directory of Open Access Journals (Sweden)

    Bruce P. Lester

    2010-12-01

    Full Text Available This paper describes a general technique to identify control flow errors in parallel programs, which can be automated into a compiler. The compiler builds a system of linear equations that describes the global control flow of the whole program. Solving these equations using standard techniques of linear algebra can locate a wide range of control flow bugs at compile time. This paper also describes an implementation of this control flow analysis technique in a prototype compiler for a well-known parallel programming language. In contrast to previous research in automated parallel program analysis, our technique is efficient for large programs, and does not limit the range of language features.

  14. Chef infrastructure automation cookbook

    CERN Document Server

    Marschall, Matthias

    2013-01-01

    Chef Infrastructure Automation Cookbook contains practical recipes on everything you will need to automate your infrastructure using Chef. The book is packed with illustrated code examples to automate your server and cloud infrastructure. The book first shows you the simplest way to achieve a certain task. Then it explains every step in detail, so that you can build your knowledge about how things work. Eventually, the book shows you additional things to consider for each approach. That way, you can learn step-by-step and build profound knowledge on how to go about your configuration management.

  15. A centralized global automation group in a decentralized organization.

    Science.gov (United States)

    Ormand, J; Bruner, J; Birkemo, L; Hinderliter-Smith, J; Veitch, J

    2000-01-01

    In the latter part of the 1990s, many companies have worked to foster a 'matrix' style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized resource providing development, support, integration, and training in laboratory automation across businesses in the Development organization. The matrix culture still presents challenges with respect to communication and managing the development of technology. A current challenge for our team is to go beyond our recognized role as a technology resource and actually to influence automation strategies across the global Development organization. We shall provide an overview of our role as a centralized resource, our team strategy, examples of current and past successes and failures, and future directions.

  16. Parallel nearest neighbor calculations

    Science.gov (United States)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  17. Logical inference techniques for loop parallelization

    KAUST Repository

    Oancea, Cosmin E.

    2012-01-01

    This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from the PERFECT CLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
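
    A minimal sketch of the underlying hybrid idea (far simpler than the paper's USR and predicate machinery; the loop and index array are hypothetical): derive a sufficient independence condition and test it at run time before executing the loop in parallel:

    ```cpp
    // Hybrid validation: cheap sufficient condition checked at run time,
    // parallel execution only when it holds.
    #include <iostream>
    #include <numeric>
    #include <set>
    #include <vector>

    int main() {
        const int n = 1000;
        std::vector<int> idx(n);                  // indirect index array (hypothetical)
        std::vector<int> a(2 * n, 1);
        std::iota(idx.begin(), idx.end(), 0);

        // Sufficient independence condition: all written indexes are distinct,
        // so no two iterations touch the same element of a.
        std::set<int> distinct(idx.begin(), idx.end());
        const bool independent = distinct.size() == idx.size();

        if (independent) {
            #pragma omp parallel for              // safe: iterations do not alias
            for (int i = 0; i < n; ++i) a[idx[i]] += i;
        } else {
            for (int i = 0; i < n; ++i) a[idx[i]] += i;  // conservative sequential fallback
        }
        std::cout << a[n - 1] << '\n';            // prints 1 + (n-1) = 1000
    }
    ```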

  18. I-94 Automation FAQs

    Data.gov (United States)

    Department of Homeland Security — In order to increase efficiency, reduce operating costs and streamline the admissions process, U.S. Customs and Border Protection has automated Form I-94 at air and...

  19. Hydrometeorological Automated Data System

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Office of Hydrologic Development of the National Weather Service operates HADS, the Hydrometeorological Automated Data System. This data set contains the last...

  20. Automated Vehicles Symposium 2015

    CERN Document Server

    Beiker, Sven

    2016-01-01

    This edited book comprises papers about the impacts, benefits and challenges of connected and automated cars. It is the third volume of the LNMOB series dealing with Road Vehicle Automation. The book comprises contributions from researchers, industry practitioners and policy makers, covering perspectives from the U.S., Europe and Japan. It is based on the Automated Vehicles Symposium 2015 which was jointly organized by the Association of Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Ann Arbor, Michigan, in July 2015. The topical spectrum includes, but is not limited to, public sector activities, human factors, ethical and business aspects, energy and technological perspectives, vehicle systems and transportation infrastructure. This book is an indispensable source of information for academic researchers, industrial engineers and policy makers interested in the topic of road vehicle automation.

  1. Automated Vehicles Symposium 2014

    CERN Document Server

    Beiker, Sven; Road Vehicle Automation 2

    2015-01-01

    This paper collection is the second volume of the LNMOB series on Road Vehicle Automation. The book contains a comprehensive review of current technical, socio-economic, and legal perspectives written by experts coming from public authorities, companies and universities in the U.S., Europe and Japan. It originates from the Automated Vehicle Symposium 2014, which was jointly organized by the Association for Unmanned Vehicle Systems International (AUVSI) and the Transportation Research Board (TRB) in Burlingame, CA, in July 2014. The contributions discuss the challenges arising from the integration of highly automated and self-driving vehicles into the transportation system, with a focus on human factors and different deployment scenarios. This book is an indispensable source of information for academic researchers, industrial engineers, and policy makers interested in the topic of road vehicle automation.

  2. Disassembly automation automated systems with cognitive abilities

    CERN Document Server

    Vongbunyong, Supachai

    2015-01-01

    This book presents a number of aspects to be considered in the development of disassembly automation, including the mechanical system, vision system and intelligent planner. The implementation of cognitive robotics increases the flexibility and degree of autonomy of the disassembly system. Disassembly, as a step in the treatment of end-of-life products, can allow the recovery of embodied value left within disposed products, as well as the appropriate separation of potentially-hazardous components. In the end-of-life treatment industry, disassembly has largely been limited to manual labor, which is expensive in developed countries. Automation is one possible solution for economic feasibility. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  3. Instant Sikuli test automation

    CERN Document Server

    Lau, Ben

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A concise guide written in an easy-to-follow style using the Starter guide approach. This book is aimed at automation and testing professionals who want to use Sikuli to automate GUI. Some Python programming experience is assumed.

  4. Automated security management

    CERN Document Server

    Al-Shaer, Ehab; Xie, Geoffrey

    2013-01-01

    In this contributed volume, leading international researchers explore configuration modeling and checking, vulnerability and risk assessment, configuration analysis, and diagnostics and discovery. The authors equip readers to understand automated security management systems and techniques that increase overall network assurability and usability. These constantly changing networks defend against cyber attacks by integrating hundreds of security devices such as firewalls, IPSec gateways, IDS/IPS, authentication servers, authorization/RBAC servers, and crypto systems. Automated Security Managemen

  5. Automated Lattice Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Monahan, Christopher

    2014-11-01

    I review recent developments in automated lattice perturbation theory. Starting with an overview of lattice perturbation theory, I focus on the three automation packages currently "on the market": HiPPy/HPsrc, Pastor and PhySyCAl. I highlight some recent applications of these methods, particularly in B physics. In the final section I briefly discuss the related, but distinct, approach of numerical stochastic perturbation theory.

  6. Parallels with nature

    Science.gov (United States)

    2014-10-01

    Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

  7. Introduction to parallel computing

    CERN Document Server

    2003-01-01

    Introduction to Parallel Computing is a complete end-to-end source of information on almost all aspects of parallel computing from introduction to architectures to programming paradigms to algorithms to programming standards. It is the only book to have complete coverage of traditional Computer Science algorithms (sorting, graph and matrix algorithms), scientific computing algorithms (FFT, sparse matrix computations, N-body methods), and data intensive algorithms (search, dynamic programming, data-mining).

  8. Parallelization by Simulated Tunneling

    OpenAIRE

    Waterland, Amos; Appavoo, Jonathan; Seltzer, Margo I.

    2012-01-01

    As highly parallel heterogeneous computers become commonplace, automatic parallelization of software is an increasingly critical unsolved problem. Continued progress on this problem will require large quantities of information about the runtime structure of sequential programs to be stored and reasoned about. Manually formalizing all this information through traditional approaches, which rely on semantic analysis at the language or instruction level, has historically proved challenging. We ta...

  9. Continuous parallel coordinates.

    Science.gov (United States)

    Heinrich, Julian; Weiskopf, Daniel

    2009-01-01

    Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data.

  11. Balanço de água por aquisição automática de dados em cultura de trigo (Triticum aestivum L.) [The daily water consumption of a wheat culture using atmospheric and soil data]

    Directory of Open Access Journals (Sweden)

    Celso Luiz Prevedello

    2007-02-01

    Full Text Available Using automated acquisition of atmospheric data and soil water content data, this work quantified the daily water consumption of a wheat crop on a Red Latosol (Oxisol) in Ponta Grossa, state of Paraná, Brazil, from August to December 2003, emphasizing the contribution of rainfall and of upward water fluxes from the deeper soil layers to that consumption. The results showed that, over the monitored period: (a) the mean daily water depth evapotranspired by the wheat crop was 6.75 mm, with the upward water flux in the soil profile contributing 62% of this total; (b) the evapotranspiration rates estimated by the Penman method and by the soil water balance equation shifted in time with approximately equal symmetry, but with a lag of about seven days, as if the soil responded to the variations imposed by the atmosphere roughly one week later; (c) rainfall had an important effect on soil water storage, contributing to higher evapotranspiration rates; and (d) because the mean matric potential in the root zone stayed close to the critical limit for the crop, irrigation could have potentially positive impacts by making more water available in the soil and sustaining the higher evapotranspiration rates that are agronomically desirable.

  12. Automated macromolecular crystal detection system and method

    Science.gov (United States)

    Christian, Allen T.; Segelke, Brent; Rupp, Bernard; Toppani, Dominique

    2007-06-05

    An automated method and system for detecting macromolecular crystals in two-dimensional images, such as light microscopy images obtained from an array of crystallization screens. Edges are detected from the images by identifying local maxima of a phase congruency-based function associated with each image. The detected edges are segmented into discrete line segments, which are subsequently evaluated geometrically with respect to each other to identify crystal-like qualities such as parallel lines facing each other, similarity in length, and relative proximity. From this evaluation, a determination is made as to whether crystals are present in each image.
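
    The geometric evaluation step can be sketched in a few lines. The helper names and thresholds below are hypothetical, and the real system scores several cues together rather than short-circuiting on the first matching pair, but the pairwise tests (near-parallel orientation, similar length, proximity) follow the abstract.

    ```python
    import math

    def crystal_like(segments, angle_tol=0.1, len_ratio=0.7, max_gap=20.0):
        """Return True if any segment pair shows crystal-like geometry.

        segments: list of ((x1, y1), (x2, y2)) tuples from edge detection.
        Thresholds are placeholder values (radians, ratio, pixels).
        """
        def angle(s):
            (x1, y1), (x2, y2) = s
            return math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected angle

        def length(s):
            (x1, y1), (x2, y2) = s
            return math.hypot(x2 - x1, y2 - y1)

        def midpoint(s):
            (x1, y1), (x2, y2) = s
            return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

        for i, a in enumerate(segments):
            for b in segments[i + 1:]:
                d = abs(angle(a) - angle(b))
                if min(d, math.pi - d) > angle_tol:
                    continue                               # not parallel enough
                if min(length(a), length(b)) / max(length(a), length(b)) < len_ratio:
                    continue                               # lengths too dissimilar
                (ax, ay), (bx, by) = midpoint(a), midpoint(b)
                if math.hypot(bx - ax, by - ay) > max_gap:
                    continue                               # facing pair too far apart
                return True
        return False

    # Two parallel 30-pixel segments 10 pixels apart read as crystal-like.
    print(crystal_like([((0, 0), (30, 0)), ((0, 10), (30, 10))]))  # True
    ```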

  13. A centralized global automation group in a decentralized organization

    OpenAIRE

    Jeffrey Veitch; Judy Hinderliter-Smith; James Ormand; Jimmy Bruner; Larry Birkemo

    2000-01-01

    In the latter part of the 1990s, many companies have worked to foster a ‘matrix’ style culture through several changes in organizational structure. This type of culture facilitates communication and development of new technology across organizational and global boundaries. At Glaxo Wellcome, this matrix culture is reflected in an automation strategy that relies on both centralized and decentralized resources. The Group Development Operations Information Systems Robotics Team is a centralized ...

  14. A Comparison of Automatic Parallelization Tools/Compilers on the SGI Origin 2000 Using the NAS Benchmarks

    Science.gov (United States)

    Saini, Subhash; Frumkin, Michael; Hribar, Michelle; Jin, Hao-Qiang; Waheed, Abdul; Yan, Jerry

    1998-01-01

    Porting applications to new high performance parallel and distributed computing platforms is a challenging task. Since writing parallel code by hand is extremely time consuming and costly, porting codes would ideally be automated by using parallelization tools and compilers. In this paper, we compare the performance of the hand-written NAS Parallel Benchmarks against three parallel versions generated with the help of tools and compilers: 1) CAPTools, an interactive computer-aided parallelization tool that generates message passing code; 2) the Portland Group's HPF compiler; and 3) compiler directives with the native FORTRAN77 compiler on the SGI Origin2000.

  15. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    Directory of Open Access Journals (Sweden)

    Strandh Christer

    2008-07-01

    Full Text Available Abstract Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens-enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixels in size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show, using the EFLCM technique, that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  16. Parallel optical sampler

    Energy Technology Data Exchange (ETDEWEB)

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively; a plurality of optical delay elements providing n parallel delayed input optical sampling signals; n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals; and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  17. Materials Testing and Automation

    Science.gov (United States)

    Cooper, Wayne D.; Zweigoron, Ronald B.

    1980-07-01

    The advent of automation in materials testing has been in large part responsible for recent radical changes in the materials testing field: Tests virtually impossible to perform without a computer have become more straightforward to conduct. In addition, standardized tests may be performed with enhanced efficiency and repeatability. A typical automated system is described in terms of its primary subsystems — an analog station, a digital computer, and a processor interface. The processor interface links the analog functions with the digital computer; it includes data acquisition, command function generation, and test control functions. Features of automated testing are described with emphasis on calculated variable control, control of a variable that is computed by the processor and cannot be read directly from a transducer. Three calculated variable tests are described: a yield surface probe test, a thermomechanical fatigue test, and a constant-stress-intensity range crack-growth test. Future developments are discussed.

  18. ADAPTATION OF PARALLEL VIRTUAL MACHINES MECHANISMS TO PARALLEL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Zafer DEMİR

    2001-02-01

    Full Text Available In this study, the Parallel Virtual Machine (PVM) is first reviewed. Since it is based upon parallel processing, it is similar in principle to parallel systems in terms of architecture. PVM is neither an operating system nor a programming language: it is a specific software tool that supports heterogeneous parallel systems, exploiting the features of both to give users convenient access to parallel systems. Since tasks can be executed in parallel on parallel systems by PVM, there is an important similarity between PVM, distributed systems and multiprocessors. In this study, the relations in question are examined by making use of the master-slave programming technique. In conclusion, PVM is tested with a simple factorial computation on a distributed system to observe its adaptation to parallel architectures.
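
    PVM itself is a C/Fortran message-passing library, but the master-slave factorial test described here is easy to mimic. The sketch below uses Python's multiprocessing pool as a stand-in for PVM slaves; it illustrates the programming pattern, not the PVM API, and all names are our own.

    ```python
    from multiprocessing import Pool
    from functools import reduce
    from operator import mul

    def partial_product(bounds):
        """Slave task: multiply the integers in [lo, hi)."""
        lo, hi = bounds
        return reduce(mul, range(lo, hi), 1)

    def parallel_factorial(n, workers=4):
        """Master: split 1..n into chunks, farm them out, combine the results."""
        step = max(1, n // workers)
        chunks = [(lo, min(lo + step, n + 1)) for lo in range(1, n + 1, step)]
        with Pool(workers) as pool:
            return reduce(mul, pool.map(partial_product, chunks), 1)

    if __name__ == "__main__":
        import math
        assert parallel_factorial(20) == math.factorial(20)
    ```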

  19. Parallel programming with Python

    CERN Document Server

    Palach, Jan

    2014-01-01

    A fast, easy-to-follow and clear tutorial to help you develop parallel computing systems using Python. Along with explaining the fundamentals, the book will also introduce you to slightly advanced concepts and will help you in implementing these techniques in the real world. If you are an experienced Python programmer and are willing to utilize the available computing resources by parallelizing applications in a simple way, then this book is for you. You are required to have a basic knowledge of Python development to get the most out of this book.

  20. The NAS Parallel Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to the parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental

  1. A Comparative Study on Serial and Parallel Web Content Mining

    Directory of Open Access Journals (Sweden)

    Binayak Panda

    2016-03-01

    Full Text Available The World Wide Web (WWW) is a repository that serves every individual's needs, from education to entertainment and beyond. But from the user's point of view, getting relevant information with respect to one particular context is time-consuming and not easy, because of the volume of data, which is unstructured, distributed and dynamic in nature. Extracting relevant information with respect to one particular context can be automated; this automation is named Web Content Mining. The efficiency of the automation depends on the validity of the expected outcome as well as the amount of processing time. The acceptability of the outcome depends on the user or the user's policy, while the amount of processing time depends on the methodology of Web Content Mining. In this work a comparative study has been carried out between Serial Web Content Mining and Parallel Web Content Mining. The work also focuses on a framework for implementing parallelism in Web Content Mining.
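
    The serial-versus-parallel comparison can be sketched in a few lines. Here mine_page is a hypothetical stand-in for fetching one page and extracting context-relevant content; because the work is I/O-bound, a thread pool gives the parallel variant a near-linear advantage in this toy setting.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor

    def mine_page(url):
        """Stand-in for fetching a page and extracting relevant content."""
        time.sleep(0.1)                      # simulated network + parsing latency
        return url, len(url)                 # hypothetical 'relevance' result

    urls = [f"https://example.com/page{i}" for i in range(20)]

    t0 = time.perf_counter()
    serial = [mine_page(u) for u in urls]                  # serial mining
    t1 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=10) as ex:         # parallel mining
        parallel = list(ex.map(mine_page, urls))
    t2 = time.perf_counter()
    print(f"serial {t1 - t0:.2f}s, parallel {t2 - t1:.2f}s")
    ```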

  2. A MapReduce based Parallel SVM for Email Classification

    Directory of Open Access Journals (Sweden)

    Ke Xu

    2014-06-01

    Full Text Available Support Vector Machine (SVM) is a powerful classification and regression tool. Various approaches, including SVM-based techniques, have been proposed for email classification. Automated email classification according to messages or user-specific folders, and information extraction from chronologically ordered email streams, have become interesting areas in text machine learning research. This paper presents a parallel SVM based on MapReduce (PSMR) algorithm for email classification. We discuss the challenges that arise from differences between email foldering and traditional document classification. We show experimental results from an array of automated classification methods and evaluation methodologies, including Naive Bayes, SVM and the PSMR method, on foldering results for the Enron datasets based on the timeline. By distributing, processing and optimizing the subsets of the training data across multiple participating nodes, the parallel SVM based on the MapReduce algorithm reduces the training time significantly.
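
    One common way to realize the idea of a MapReduce-distributed SVM (not necessarily the authors' exact PSMR scheme) is a cascade: each map task trains a sub-SVM on its partition and emits only its support vectors, and the reduce step retrains on the pooled support sets. The sketch below assumes scikit-learn is available, data is pre-shuffled so every partition contains both classes, and the function names are our own; in a real deployment the map calls would run on separate nodes.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def map_train(X_part, y_part):
        """Map step: fit a sub-SVM on one partition, emit only its support set."""
        clf = SVC(kernel="linear").fit(X_part, y_part)
        return X_part[clf.support_], y_part[clf.support_]

    def reduce_train(support_sets):
        """Reduce step: pool the support sets and retrain a single SVM."""
        Xs = np.vstack([s[0] for s in support_sets])
        ys = np.concatenate([s[1] for s in support_sets])
        return SVC(kernel="linear").fit(Xs, ys)

    def parallel_svm(X, y, n_parts=4):
        """Sequential driver standing in for the MapReduce framework."""
        parts = zip(np.array_split(X, n_parts), np.array_split(y, n_parts))
        return reduce_train([map_train(Xp, yp) for Xp, yp in parts])
    ```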

  3. Automated business processes in outbound logistics: An information system perspective

    DEFF Research Database (Denmark)

    Tambo, Torben

    2010-01-01

    This article analyses the potentials and possibilities of changing outbound logistics from being highly labour-intensive on the information processing side to a more or less fully automated solution. Automation offers advantages in terms of direct labour cost reduction as well as indirect cost reduction … is not a matter of whether the system can or cannot, but a matter of making the best technological and economical fit. Alongside the formal implementation issues there is a parallel process focused on mutuality between IT teams, business users, management and external stakeholders in offering relevant inputs…

  4. Automating the CMS DAQ

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, G.; et al.

    2014-01-01

    We present the automation mechanisms that have been added to the Data Acquisition and Run Control systems of the Compact Muon Solenoid (CMS) experiment during Run 1 of the LHC, ranging from the automation of routine tasks to automatic error recovery and context-sensitive guidance to the operator. These mechanisms helped CMS to maintain a data taking efficiency above 90% and to even improve it to 95% towards the end of Run 1, despite an increase in the occurrence of single-event upsets in sub-detector electronics at high LHC luminosity.

  5. Automated phantom assay system

    International Nuclear Information System (INIS)

    This paper describes an automated phantom assay system developed for assaying phantoms spiked with minute quantities of radionuclides. The system includes a computer-controlled linear-translation table that positions the phantom at exact distances from a spectrometer. A multichannel analyzer (MCA) interfaces with a computer to collect gamma spectral data. Signals transmitted between the controller and MCA synchronize data collection and phantom positioning. Measured data are then stored on disk for subsequent analysis. The automated system allows continuous unattended operation and ensures reproducible results

  6. Automated gas chromatography

    Science.gov (United States)

    Mowry, Curtis D.; Blair, Dianna S.; Rodacy, Philip J.; Reber, Stephen D.

    1999-01-01

    An apparatus and process for the continuous, near real-time monitoring of low-level concentrations of organic compounds in a liquid, and, more particularly, a water stream. A small liquid volume of flow from a liquid process stream containing organic compounds is diverted by an automated process to a heated vaporization capillary where the liquid volume is vaporized to a gas that flows to an automated gas chromatograph separation column to chromatographically separate the organic compounds. Organic compounds are detected and the information transmitted to a control system for use in process control. Concentrations of organic compounds less than one part per million are detected in less than one minute.

  7. Parallel universes beguile science

    CERN Multimedia

    2007-01-01

    A staple of mind-bending science fiction, the possibility of multiple universes has long intrigued hard-nosed physicists, mathematicians and cosmologists too. We may not be able -- at least not yet -- to prove they exist, many serious scientists say, but there are plenty of reasons to think that parallel dimensions are more than figments of eggheaded imagination.

  8. Practical parallel programming

    CERN Document Server

    Bauer, Barr E

    2014-01-01

    This is the book that will teach programmers to write faster, more efficient code for parallel processors. The reader is introduced to a vast array of procedures and paradigms on which actual coding may be based. Examples and real-life simulations using these devices are presented in C and FORTRAN.

  9. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
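
    The symmetrization alluded to here can be reproduced with the standard form-factor reciprocity relation. The dissertation's exact transform may differ; the following is a textbook sketch, assuming the classical radiosity system with radiosity B_i, emission E_i, reflectivity rho_i, patch area A_i and form factors F_ij.

    ```latex
    % Classical radiosity system:
    B_i \;=\; E_i + \rho_i \sum_j F_{ij} B_j .
    % Scaling the i-th equation by A_i / \rho_i gives
    \frac{A_i}{\rho_i} B_i \;-\; \sum_j A_i F_{ij} B_j \;=\; \frac{A_i}{\rho_i} E_i ,
    % whose off-diagonal coefficients -A_i F_{ij} form a symmetric matrix,
    % by the reciprocity relation  A_i F_{ij} = A_j F_{ji} .
    ```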

  10. Optical parallel selectionist systems

    Science.gov (United States)

    Caulfield, H. John

    1993-01-01

    There are at least two major classes of computers in nature and technology: connectionist and selectionist. A subset of connectionist systems (Turing Machines) dominates modern computing, although another subset (Neural Networks) is growing rapidly. Selectionist machines have unique capabilities which should allow them to do truly creative operations. It is possible to make a parallel optical selectionist system using methods described in this paper.

  11. Parallel hierarchical global illumination

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  12. Parallel Fast Legendre Transform

    NARCIS (Netherlands)

    Alves de Inda, M.; Bisseling, R.H.; Maslen, D.K.

    2001-01-01

    We discuss a parallel implementation of a fast algorithm for the discrete polynomial Legendre transform. We give an introduction to the Driscoll-Healy algorithm using polynomial arithmetic and present experimental results on the efficiency and accuracy of our implementation. The algorithms were implemented…
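
    For reference, the transform being accelerated can be computed naively in O(n^2) time; the Driscoll-Healy algorithm produces the same coefficients in roughly O(n log^2 n). The numpy sketch below is our own construction of the naive version, not the paper's code.

    ```python
    import numpy as np

    def discrete_legendre_transform(f, n):
        """Naive O(n^2) forward Legendre transform via Gauss-Legendre quadrature.

        Returns coefficients c_l such that f(x) ~ sum_l c_l P_l(x) on [-1, 1].
        """
        x, w = np.polynomial.legendre.leggauss(n)       # nodes and weights
        P = np.polynomial.legendre.legvander(x, n - 1)  # P_l(x_j), shape (n, n)
        norm = (2 * np.arange(n) + 1) / 2               # 1 / ||P_l||^2 on [-1, 1]
        return norm * (P.T @ (w * f(x)))

    coeffs = discrete_legendre_transform(lambda x: 3 * x**2, 4)
    print(np.round(coeffs, 10))   # 3x^2 = P_0 + 2*P_2  ->  [1. 0. 2. 0.]
    ```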

  13. Automated solvent concentrator

    Science.gov (United States)

    Griffith, J. S.; Stuart, J. L.

    1976-01-01

    Designed for the automated drug identification system (AUDRI), the device increases concentration by a factor of 100. The sample is first filtered, removing particulate contaminants and reducing the water content of the sample. The sample is then extracted from the filtered residue by a specific solvent. The concentrator provides input material to the analysis subsystem.

  14. Protokoller til Home Automation

    DEFF Research Database (Denmark)

    Kjær, Kristian Ellebæk

    2008-01-01

    …computer that can switch between predefined settings. Sometimes the computer can be controlled remotely over the internet, so that the status of the home can be viewed from a computer, or perhaps even from a mobile phone. While the applications mentioned are classics within home automation, additional functionality has emerged…

  15. ELECTROPNEUMATIC AUTOMATION EDUCATIONAL LABORATORY

    OpenAIRE

    Dolgorukov, S. O.; National Aviation University; Roman, B. V.; National Aviation University

    2013-01-01

    The article reflects the current situation in education regarding the difficulties of learning mechatronics. A complex of laboratory test benches on electropneumatic automation is considered as a tool for advancing through technical science. A course of laboratory work, developed to meet the requirement of an efficient and reliable way of acquiring practical skills, is regarded as the simplest way for students to learn the basics of mechatronics.

  16. Building Automation Systems.

    Science.gov (United States)

    Honeywell, Inc., Minneapolis, Minn.

    A number of different automation systems for use in monitoring and controlling building equipment are described in this brochure. The system functions include--(1) collection of information, (2) processing and display of data at a central panel, and (3) taking corrective action by sounding alarms, making adjustments, or automatically starting and…

  17. Test Construction: Automated

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2014-01-01

    Optimal test construction deals with automated assembly of tests for educational and psychological measurement. Items are selected from an item bank to meet a predefined set of test specifications. Several models for optimal test construction are presented, and two algorithms for optimal test assembly…

  18. Test Construction: Automated

    NARCIS (Netherlands)

    Veldkamp, Bernard P.

    2016-01-01

    Optimal test construction deals with automated assembly of tests for educational and psychological measurement. Items are selected from an item bank to meet a predefined set of test specifications. Several models for optimal test construction are presented, and two algorithms for optimal test assembly…

  19. Automated Web Applications Testing

    Directory of Open Access Journals (Sweden)

    Alexandru Dan CĂPRIŢĂ

    2009-01-01

    Full Text Available Unit tests are a vital part of several software development practices and processes such as Test-First Programming, Extreme Programming and Test-Driven Development. This article briefly presents software quality and testing concepts as well as an introduction to an automated unit testing framework for PHP web-based applications.

  20. Automated Student Model Improvement

    Science.gov (United States)

    Koedinger, Kenneth R.; McLaughlin, Elizabeth A.; Stamper, John C.

    2012-01-01

    Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational…

  1. Myths in test automation

    Directory of Open Access Journals (Sweden)

    Jazmine Francis

    2015-01-01

    Full Text Available Myths in the automation of software testing are a recurring topic of discussion in the software validation industry. Probably the first thought to appear in a knowledgeable reader's mind would be: why this old topic again? What is new to discuss about the matter? But everyone agrees that test automation today is not what it used to be ten or fifteen years ago, because it has evolved in scope and magnitude. What began as simple linear scripts for web applications today has a complex architecture and hybrid frameworks to facilitate the testing of applications developed with various platforms and technologies. Undoubtedly automation has advanced, but so have the myths associated with it. The change in people's perspective on and knowledge of automation has altered the terrain. This article reflects the author's points of view and experience regarding the transformation of the original myths into new versions and how they are derived; it also provides his thoughts on the new generation of myths.

  2. Parallel processing of remotely sensed data: Application to the ATSR-2 instrument

    Science.gov (United States)

    Simpson, J.; McIntire, T.; Berg, J.; Tsou, Y.

    2007-01-01

    Massively parallel computational paradigms can mitigate many issues associated with the analysis of large and complex remotely sensed data sets. Recently, the Beowulf cluster has emerged as the most attractive massively parallel architecture due to its low cost and high performance. Whereas most Beowulf designs have emphasized numerical modeling applications, the Parallel Image Processing Environment (PIPE) specifically addresses the unique requirements of remote sensing applications. Automated parallelization of user-defined analyses is fully supported. A neural network application, applied to Along Track Scanning Radiometer-2 (ATSR-2) data, shows the advantages and performance characteristics of PIPE.

  3. Embryoid Body-Explant Outgrowth Cultivation from Induced Pluripotent Stem Cells in an Automated Closed Platform

    Science.gov (United States)

    Tone, Hiroshi; Yoshioka, Saeko; Akiyama, Hirokazu; Nishimura, Akira; Ichimura, Masaki; Nakatani, Masaru; Kiyono, Tohru

    2016-01-01

    Automation of cell culture would facilitate stable cell expansion with consistent quality. In the present study, feasibility of an automated closed-cell culture system “P 4C S” for an embryoid body- (EB-) explant outgrowth culture was investigated as a model case for explant culture. After placing the induced pluripotent stem cell- (iPSC-) derived EBs into the system, the EBs successfully adhered to the culture surface and the cell outgrowth was clearly observed surrounding the adherent EBs. After confirming the outgrowth, we carried out subculture manipulation, in which the detached cells were simply dispersed by shaking the culture flask, leading to uniform cell distribution. This enabled continuous stable cell expansion, resulting in a cell yield of 3.1 × 107. There was no evidence of bacterial contamination throughout the cell culture experiments. We herewith developed the automated cultivation platform for EB-explant outgrowth cells. PMID:27648449

  5. Automating spectral measurements

    Science.gov (United States)

    Goldstein, Fred T.

    2008-09-01

    This paper discusses the architecture of software utilized in spectroscopic measurements. As optical coatings become more sophisticated, there is mounting need to automate data acquisition (DAQ) from spectrophotometers. Such need is exacerbated when 100% inspection is required, ancillary devices are utilized, cost reduction is crucial, or security is vital. While instrument manufacturers normally provide point-and-click DAQ software, an application programming interface (API) may be missing. In such cases automation is impossible or expensive. An API is typically provided in libraries (*.dll, *.ocx) which may be embedded in user-developed applications. Users can thereby implement DAQ automation in several Windows languages. Another possibility, developed by FTG as an alternative to instrument manufacturers' software, is the ActiveX application (*.exe). ActiveX, a component of many Windows applications, provides means for programming and interoperability. This architecture permits a point-and-click program to act as automation client and server. Excel, for example, can control and be controlled by DAQ applications. Most importantly, ActiveX permits ancillary devices such as barcode readers and XY-stages to be easily and economically integrated into scanning procedures. Since an ActiveX application has its own user-interface, it can be independently tested. The ActiveX application then runs (visibly or invisibly) under DAQ software control. Automation capabilities are accessed via a built-in spectro-BASIC language with industry-standard (VBA-compatible) syntax. Supplementing ActiveX, spectro-BASIC also includes auxiliary serial port commands for interfacing programmable logic controllers (PLC). A typical application is automatic filter handling.

  6. Parallel computers and parallel algorithms for CFD: An introduction

    Science.gov (United States)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  7. To Parallelize or Not to Parallelize, Speed Up Issue

    Directory of Open Access Journals (Sweden)

    Alaa Ismail El-Nashar

    2011-03-01

    Full Text Available Running parallel applications requires special and expensive processing resources to obtain the required results within a reasonable time. Before parallelizing serial applications, some analysis is recommended to be carried out to decide whether an application will benefit from parallelization or not. In this paper we discuss the issue of the speed up gained from parallelization using the Message Passing Interface (MPI), to compromise between the overhead of parallelization cost and the gained parallel speed up. We also propose an experimental method to predict the speed up of MPI applications.
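
    The compromise studied here can be made concrete with an Amdahl-style model extended by a communication penalty. The linear overhead term below is a hypothetical stand-in for real MPI costs, which depend on message sizes and network latency, but it is enough to show that adding processes eventually hurts.

    ```python
    def predicted_speedup(serial_frac, n_procs, comm_overhead=0.0):
        """Amdahl-style speedup estimate with a communication penalty.

        serial_frac: fraction of the runtime that cannot be parallelized.
        comm_overhead: extra time per process, normalized to the serial
        runtime (a hypothetical linear cost model).
        """
        parallel_frac = 1.0 - serial_frac
        t = serial_frac + parallel_frac / n_procs + comm_overhead * n_procs
        return 1.0 / t

    # Speedup peaks around 16 processes, then communication costs dominate.
    for p in (2, 4, 8, 16, 32):
        print(p, round(predicted_speedup(0.05, p, comm_overhead=0.002), 2))
    ```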

  8. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
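
    The two-phase scheme in this claim can be sketched directly. The example below is a 1-D toy (objects are intervals on [0, 1) and portion k covers [k/n, (k+1)/n)); the 'processors' run as a sequential loop, and names like portions_bounding are our own.

    ```python
    from collections import defaultdict

    def portions_bounding(obj, n):
        """Toy bounding test: which of the n equal portions does obj touch?"""
        lo, hi = obj
        return range(int(lo * n), min(int(hi * n) + 1, n))

    def populate_grid(objects, n):
        object_sets = [objects[i::n] for i in range(n)]   # n distinct object sets

        # Phase 1: each worker classifies its objects by bounding portion(s).
        routed = defaultdict(list)
        for objs in object_sets:                          # conceptually parallel
            for obj in objs:
                for portion in portions_bounding(obj, n):
                    routed[portion].append(obj)

        # Phase 2: each worker populates the one grid portion it owns.
        return {portion: routed[portion] for portion in range(n)}

    grid = populate_grid([(0.05, 0.10), (0.20, 0.30), (0.60, 0.95)], n=4)
    print(grid)  # the (0.60, 0.95) object lands in portions 2 and 3
    ```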

  9. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  10. PARALLEL MOVING MECHANICAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Florian Ion Tiberius Petrescu

    2014-09-01

    Full Text Available Parallel moving mechanical systems are solid, fast, and accurate. Among parallel systems, Stewart platforms are to be noticed as the oldest systems: fast, solid and precise. This work outlines a few main elements of Stewart platforms. It begins with the platform geometry and its kinematic elements, and then presents a few items of dynamics. The primary dynamic element is the determination of the kinetic energy of the entire Stewart platform mechanism. The kinematics of the mobile platform are then recorded using a rotation-matrix method. If a structural motor element consists of two moving elements that translate relative to each other, it is more convenient, for the drive train and especially for the dynamics, to represent the motor element as a single moving component. We thus have seven moving parts (the six motor elements, or feet, plus the mobile platform 7) and one fixed part.

  11. Parallel clustering with CFinder

    CERN Document Server

    Pollner, Peter; Vicsek, Tamas; 10.1142/S0129626412400014

    2012-01-01

    The amount of available data about complex systems is increasing every year, as measurements of larger and larger systems are collected and recorded. A natural representation of such data is given by networks, whose size follows the size of the original system. The current trend of multiple cores in computing infrastructures calls for a parallel reimplementation of earlier methods. Here we present the grid version of CFinder, which can locate overlapping communities in directed, weighted or undirected networks based on the clique percolation method (CPM). We show that the computation of the communities can be distributed among several CPUs or computers. Although switching to the parallel version does not necessarily lead to a gain in computing time, it definitely makes the community structure of extremely large networks accessible.
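
    A sequential reference implementation of the clique percolation method is available in networkx, which the toy below uses; the grid CFinder described here distributes this computation, which the sketch does not attempt. It assumes networkx is installed.

    ```python
    import networkx as nx
    from networkx.algorithms.community import k_clique_communities

    # Two triangles sharing one node: CPM with k=3 requires adjacent 3-cliques
    # to share k-1 = 2 nodes, so this yields two communities overlapping at 2.
    G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
    for community in k_clique_communities(G, 3):
        print(sorted(community))   # [0, 1, 2] and [2, 3, 4]
    ```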

  12. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  13. TAO par syntaxeur parallele

    OpenAIRE

    Diaz, C.

    1990-01-01

    In remote control, the master element which the user operates looks, for practical and historical reasons, like the slave arm, and therefore features a series architecture, with a few drawbacks in terms of mass, dimensions, rigidity and mechanical complexity. To remedy these defects, we are now introducing a new master element with parallel kinematics. This syntactor, derived from Stewart's manipulators, has six degrees of freedom and comprises six motor-driven links arranged on a fixed plate (t...

  14. Xyce parallel electronic simulator.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd S; Pawlowski, Roger P; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is, to the extent possible, to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  15. Parallel programming with MPI

    International Nuclear Information System (INIS)

    MPI is a practical, portable, efficient and flexible standard for message passing, which has been implemented on most MPPs and network of workstations by machine vendors, universities and national laboratories. MPI avoids specifying how operations will take place and superfluous work to achieve efficiency as well as portability, and is also designed to encourage overlapping communication and computation to hide communication latencies. This presentation briefly explains the MPI standard, and comments on efficient parallel programming to improve performance. (author)

  16. Stability of parallel flows

    CERN Document Server

    Betchov, R

    2012-01-01

    Stability of Parallel Flows provides information pertinent to hydrodynamical stability. This book explores the stability problems that occur in various fields, including electronics, mechanics, oceanography, administration, economics, as well as naval and aeronautical engineering. Organized into two parts encompassing 10 chapters, this book starts with an overview of the general equations of a two-dimensional incompressible flow. This text then explores the stability of a laminar boundary layer and presents the equation of the inviscid approximation. Other chapters present the general equation

  17. Parallel programming with MPI

    Energy Technology Data Exchange (ETDEWEB)

    Tatebe, Osamu [Electrotechnical Lab., Tsukuba, Ibaraki (Japan)

    1998-03-01

    MPI is a practical, portable, efficient and flexible standard for message passing, which has been implemented on most MPPs and network of workstations by machine vendors, universities and national laboratories. MPI avoids specifying how operations will take place and superfluous work to achieve efficiency as well as portability, and is also designed to encourage overlapping communication and computation to hide communication latencies. This presentation briefly explains the MPI standard, and comments on efficient parallel programming to improve performance. (author)

  18. Parallel computing techniques

    OpenAIRE

    Nakano, Junji

    2004-01-01

    Parallel computing means to divide a job into several tasks and use more than one processor simultaneously to perform these tasks. Assume you have developed a new estimation method for the parameters of a complicated statistical model. After you prove the asymptotic characteristics of the method (for instance, asymptotic distribution of the estimator), you wish to perform many simulations to assure the goodness of the method for reasonable numbers of data values and for different values of pa...

  19. Controlled Fuzzy Parallel Rewriting

    OpenAIRE

    Asveld, Peter R.J.

    1996-01-01

    We study a Lindenmayer-like parallel rewriting system to model the growth of filaments (arrays of cells) in which developmental errors may occur. In essence this model is the fuzzy analogue of the derivation-controlled iteration grammar. Under minor assumptions on the family of control languages and on the family of fuzzy languages in the underlying iteration grammar, we show that (i) regular control does not provide additional generating power to the model, (ii) the number of fuzzy substitut...

  20. Controlled Fuzzy Parallel Rewriting

    OpenAIRE

    Asveld, Peter R.J.; Paun, G.; Salomaa, A

    1997-01-01

    We study a Lindenmayer-like parallel rewriting system to model the growth of filaments (arrays of cells) in which developmental errors may occur. In essence this model is the fuzzy analogue of the derivation-controlled iteration grammar. Under minor assumptions on the family of control languages and on the family of fuzzy languages in the underlying iteration grammar, we show (i) regular control does not provide additional generating power to the model, (ii) the number of fuzzy substitutions ...

  1. Parallel Feature Extraction System

    Institute of Scientific and Technical Information of China (English)

    MAHuimin; WANGYan

    2003-01-01

    Very high speed image processing is needed in some applications, especially for weapons. In this paper, a high-speed image feature extraction system with a parallel structure was implemented in a Complex Programmable Logic Device (CPLD); it can perform image feature extraction in several microseconds, almost without delay. The design is presented through the application instance of a flying plane, whose infrared image includes two kinds of features: geometric shape features in the binary image and temperature features in the gray image. Feature extraction is accordingly performed on both kinds of features. Edges and area are the two most important features of an image. Angles often exist at the connections of the different parts of the target's image, indicating where one area ends and another begins. These three key features can form the whole representation of an image. The parallel feature extraction system therefore includes three processing modules: edge extraction, angle extraction and area extraction. The parallel structure is realized by a group of processors: every detector is followed by one processor route, every route has the same circuit form, and all routes work together at the same time, controlled by a common clock, to realize feature extraction. The extraction system has a simple structure, small volume, high speed, and good stability against noise. It can be used in battlefield recognition systems.

  2. Cultural Resources, Recreation Areas, ESRI - This data set represents the recreational areas found in Utah, including campgrounds, golf courses and ski resorts. Published in 2001, smaller than 1:100000 scale, State of Utah Automated Geographic Reference Center.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Cultural Resources dataset, published at Smaller than 1:100000 scale, was produced all or in part from Published Reports/Deeds information as of 2001. It is...

  3. Rapid automated nuclear chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, R.A.

    1979-05-31

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC.

  4. Automated theorem proving.

    Science.gov (United States)

    Plaisted, David A

    2014-03-01

    Automated theorem proving is the use of computers to prove or disprove mathematical or logical statements. Such statements can express properties of hardware or software systems, or facts about the world that are relevant for applications such as natural language processing and planning. A brief introduction to propositional and first-order logic is given, along with some of the main methods of automated theorem proving in these logics. These methods include resolution, Davis and Putnam-style approaches, and others. Methods for handling the equality axioms are also presented. Methods of theorem proving in propositional logic are presented first, followed by methods for first-order logic. PMID:26304304
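
    A propositional resolution prover fits in a few lines and shows the flavor of the methods surveyed. This saturation loop is a minimal sketch under our own clause encoding (no subsumption or tautology deletion, so it is exponential in the worst case).

    ```python
    from itertools import combinations

    def resolve(c1, c2):
        """Return all resolvents of two clauses (frozensets of literals).

        Literals are strings; 'p' and '~p' are complementary.
        """
        out = []
        for lit in c1:
            comp = lit[1:] if lit.startswith("~") else "~" + lit
            if comp in c2:
                out.append((c1 - {lit}) | (c2 - {comp}))
        return out

    def unsatisfiable(clauses):
        """Saturation-based resolution: True iff the empty clause is derivable."""
        clauses = set(map(frozenset, clauses))
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for r in resolve(c1, c2):
                    if not r:
                        return True          # derived the empty clause
                    new.add(frozenset(r))
            if new <= clauses:
                return False                 # saturated without contradiction
            clauses |= new

    # Prove p from (p or q) and (~q) by refuting the negated goal ~p.
    print(unsatisfiable([{"p", "q"}, {"~q"}, {"~p"}]))  # True: p follows
    ```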

  5. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots, over 100 PB of storage space on disk or tape. Monitoring of status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by the HammerCloud, to automatic exclusion from production or analysis activities.

  6. Rapid automated nuclear chemistry

    International Nuclear Information System (INIS)

    Rapid Automated Nuclear Chemistry (RANC) can be thought of as the Z-separation of Neutron-rich Isotopes by Automated Methods. The range of RANC studies of fission and its products is large. In a sense, the studies can be categorized into various energy ranges from the highest where the fission process and particle emission are considered, to low energies where nuclear dynamics are being explored. This paper presents a table which gives examples of current research using RANC on fission and fission products. The remainder of this text is divided into three parts. The first contains a discussion of the chemical methods available for the fission product elements, the second describes the major techniques, and in the last section, examples of recent results are discussed as illustrations of the use of RANC

  7. The Automated Medical Office

    OpenAIRE

    Petreman, Mel

    1990-01-01

    With shock and surprise many physicians learned in the 1980s that they must change the way they do business. Competition for patients, increasing government regulation, and the rapidly escalating risk of litigation forces physicians to seek modern remedies in office management. The author describes a medical clinic that strives to be paperless using electronic innovation to solve the problems of medical practice management. A computer software program to automate information management in a c...

  8. Automation in biological crystallization.

    Science.gov (United States)

    Stewart, Patrick Shaw; Mueller-Dieckmann, Jochen

    2014-06-01

    Crystallization remains the bottleneck in the crystallographic process leading from a gene to a three-dimensional model of the encoded protein or RNA. Automation of the individual steps of a crystallization experiment, from the preparation of crystallization cocktails for initial or optimization screens to the imaging of the experiments, has been the response to address this issue. Today, large high-throughput crystallization facilities, many of them open to the general user community, are capable of setting up thousands of crystallization trials per day. It is thus possible to test multiple constructs of each target for their ability to form crystals on a production-line basis. This has improved success rates and made crystallization much more convenient. High-throughput crystallization, however, cannot relieve users of the task of producing samples of high quality. Moreover, the time gained from eliminating manual preparations must now be invested in the careful evaluation of the increased number of experiments. The latter requires a sophisticated data and laboratory information-management system. A review of the current state of automation at the individual steps of crystallization with specific attention to the automation of optimization is given.

  9. El general de brigada es um tipo de caramelo – machine translation and cultural learning.
    DOI: 10.5007/2175-7968.2011v1n27p243

    OpenAIRE

    Nylcea Thereza de Siqueira Pedra; Ruth Bohunovsky

    2011-01-01

    This article aims to investigate the role assumed by (machine) translation in the teaching/learning of foreign languages. To this end, it first discusses different didactic functions attributed to translation. Among them, the one aiming at “cultural learning” stands out, which seeks to sensitize learners to cultural aspects related to the language. Drawing on several theorists, the use of concepts such as “culture”, “language” and “translation” is problematized, and ...

  10. Resistor Combinations for Parallel Circuits.

    Science.gov (United States)

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
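
    The tables mentioned here follow from the reciprocal-sum rule 1/R_total = sum(1/R_i); exact rational arithmetic makes it easy to enumerate resistor pairs whose parallel combination is a whole number, as in this small sketch (the value list is an arbitrary example).

    ```python
    from itertools import combinations_with_replacement
    from fractions import Fraction

    def parallel_resistance(resistors):
        """1/R_total = sum(1/R_i) for resistors in parallel."""
        return 1 / sum(Fraction(1, r) for r in resistors)

    # Find pairs of common values whose parallel combination is a whole number.
    values = [10, 12, 15, 20, 30, 60]
    for a, b in combinations_with_replacement(values, 2):
        total = parallel_resistance([a, b])
        if total.denominator == 1:
            print(f"{a} || {b} = {total} ohms")   # e.g. 10 || 15 = 6 ohms
    ```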

  11. Parallel External Memory Graph Algorithms

    DEFF Research Database (Denmark)

    Arge, Lars Allan; Goodrich, Michael T.; Sitchinava, Nodari

    2010-01-01

    In this paper, we study parallel I/O-efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest… an optimal speedup of Θ(P) in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

  12. Parallel processing approaches in robotics

    OpenAIRE

    Henrich, Dominik; Höniger, Thomas

    1997-01-01

    This paper presents the different possibilities for parallel processing in robot control architectures. At the beginning, we briefly review the historical development of control architectures. Then, a list of requirements for control architectures is set up from a parallel processing point of view. As our main topic, we identify the levels of parallel processing in robot control architectures. With each level of parallelism, examples of a typical robot control architecture are presented. Final...

  13. Parallel Repetition From Fortification

    OpenAIRE

    Moshkovitz Aaronson, Dana Hadar

    2014-01-01

    The Parallel Repetition Theorem upper-bounds the value of a repeated (tensored) two prover game in terms of the value of the base game and the number of repetitions. In this work we give a simple transformation on games – “fortification” – and show that for fortified games, the value of the repeated game decreases perfectly exponentially with the number of repetitions, up to an arbitrarily small additive error. Our proof is combinatorial and short. As corollaries, we obtain: (1) Starting from...

  14. Synchronous Parallel Kinetic Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Martínez, E; Marian, J; Kalos, M H

    2006-12-14

    A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
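
    The essence of perfect time synchronicity is that every domain advances on a common Poisson clock: each process pads its local rate r_i up to a shared maximum r_max with "null events" that do nothing. The sketch below is a schematic of that idea under our own simplifications, not the authors' algorithm; all names are hypothetical.

    ```python
    import random

    def synchronous_kmc_step(domain_rates, r_max):
        """One synchronous step across all domains (one per process, conceptually).

        Each domain's total rate is padded to r_max with a null event of rate
        r_max - r_i, so every domain shares the same exponential time increment
        and the simulation stays exactly synchronized.
        """
        dt = random.expovariate(r_max)          # shared time increment
        events = []
        for i, r in enumerate(domain_rates):
            if random.random() < r / r_max:     # real event, probability r/r_max
                events.append(i)
            # otherwise: null event, nothing happens in domain i
        return dt, events

    t, rates = 0.0, [0.5, 1.0, 2.0]
    for _ in range(5):
        dt, fired = synchronous_kmc_step(rates, r_max=max(rates))
        t += dt
        print(f"t = {t:.3f}, events in domains {fired}")
    ```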

  15. CSM parallel structural methods research

    Science.gov (United States)

    Storaasli, Olaf O.

    1989-01-01

    Parallel structural methods, research team activities, advanced architecture computers for parallel computational structural mechanics (CSM) research, the FLEX/32 multicomputer, a parallel structural analyses testbed, blade-stiffened aluminum panel with a circular cutout and the dynamic characteristics of a 60 meter, 54-bay, 3-longeron deployable truss beam are among the topics discussed.

  16. Theory of Parallel Mechanisms

    CERN Document Server

    Huang, Zhen; Ding, Huafeng

    2013-01-01

    This book contains mechanism analysis and synthesis. In mechanism analysis, a mobility methodology is first systematically presented. This methodology is based on the author's screw theory, proposed in 1997, whose generality and validity were only proved recently; mobility is a very complex issue, researched by various scientists over the last 150 years. The principle of kinematic influence coefficients and its latest developments are described. This principle is suitable for kinematic analysis of various 6-DOF and lower-mobility parallel manipulators. The singularities are classified from a new point of view, and progress in position-singularity and orientation-singularity is stated. In addition, the concept of over-determinate input is proposed and a new method of force analysis based on screw theory is presented. In mechanism synthesis, the synthesis of spatial parallel mechanisms is discussed, along with the synthesis method for difficult 4-DOF and 5-DOF symmetric mechanisms, which was first put forward by the a...

  17. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  18. Anti-parallel triplexes

    DEFF Research Database (Denmark)

    Kosbar, Tamer R.; Sofan, Mamdouh A.; Waly, Mohamed A.;

    2015-01-01

    -parallel TFO strand was modified with Y with one or two insertions at the end of the TFO strand, the thermal stability was increased 1.2 °C and 3 °C at pH 7.2, respectively, whereas one insertion in the middle of the TFO strand decreased the thermal stability 1.4 °C compared to the wild type oligonucleotide......-1-yl chain, especially at the end of the TFO strand. On the other hand, the thermal stability of the anti-parallel triplex was dramatically decreased when the TFO strand was modified with the LNA monomer analog Z in the middle of the TFO strand (ΔTm = -9.1 °C). Also the thermal stability...... decreased about 6.1 °C when the TFO strand was modified with Z and the Watson-Crick strand with adenine-LNA (AL). The molecular modeling results showed that, in case of nucleobases Y and Z a hydrogen bond (1.69 and 1.72 Å, respectively) was formed between the protonated 3-aminopropyn-1-yl chain...

  19. Massively Parallel QCD

    International Nuclear Information System (INIS)

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  20. Toward the automation of road networks extraction processes

    Science.gov (United States)

    Leymarie, Frederic; Boichis, Nicolas; Airault, Sylvain; Jamet, Olivier

    1996-12-01

    Syseca and IGN are working on various steps in the ongoing march from digital photogrammetry to the semi-automation and ultimately the full automation of data manipulation, i.e., capture and analysis. The immediate goals are to reduce the production costs and the data availability delays. Within this context, we have tackled the distinctive problem of 'automated road network extraction.' The methodology adopted is to first study semi-automatic solutions which probably increase the global efficiency of human operators in topographic data capture; in a second step, automatic solutions are designed based upon the gained experience. We report on different (semi-)automatic solutions for the road following algorithm. One key aspect of our method is to have the stages of 'detection' and 'geometric recovery' cooperate together while remaining distinct. 'Detection' is based on a local (texture) analysis of the image, while 'geometric recovery' is concerned with the extraction of 'road objects' for both monocular and stereo information. 'Detection' is a low-level visual process, 'reasoning' directly at the level of image intensities, while the mid-level visual process, 'geometric recovery', uses contextual knowledge about roads, both generic, e.g. parallelism of borders, and specific, e.g. using previously extracted road segments and disparities. We then pursue our 'march' by reporting on steps we are exploring toward full automation. We have in particular made attempts at tackling the automation of the initialization step to start searching in a valid direction.

  1. Automation in organizations: Eternal conflict

    Science.gov (United States)

    Dieterly, D. L.

    1981-01-01

    Some ideas on and insights into the problems associated with automation in organizations are presented with emphasis on the concept of automation, its relationship to the individual, and its impact on system performance. An analogy is drawn, based on an American folk hero, to emphasize the extent of the problems encountered when dealing with automation within an organization. A model is proposed to focus attention on a set of appropriate dimensions. The function allocation process becomes a prominent aspect of the model. The current state of automation research is mentioned in relation to the ideas introduced. Proposed directions for an improved understanding of automation's effect on the individual's efficiency are discussed. The importance of understanding the individual's perception of the system in terms of the degree of automation is highlighted.

  2. Automated Assessment, Face to Face

    OpenAIRE

    Rizik M. H. Al-Sayyed; Amjad Hudaib; Muhannad AL-Shboul; Yousef Majdalawi; Mohammed Bataineh

    2010-01-01

    This research paper evaluates the usability of automated exams and compares them with the paper-and-pencil traditional ones. It presents the results of a detailed study conducted at The University of Jordan (UoJ) that comprised students from 15 faculties. A set of 613 students were asked about their opinions concerning automated exams; and their opinions were deeply analyzed. The results indicate that most students reported that they are satisfied with using automated exams but they have sugg...

  3. Automation System Products and Research

    OpenAIRE

    Rintala, Mikko; Sormunen, Jussi; Kuisma, Petri; Rahkala, Matti

    2014-01-01

    Automation systems are used in most buildings nowadays. In the past they were mainly used in industry to control and monitor critical systems. During the past few decades automation systems have become more common and are used today in everything from big industrial solutions to the homes of private customers. With the growing need for ecological and cost-efficient management systems, home and building automation systems are becoming a standard way of controlling lighting, ventilation, heating etc. Auto...

  4. Test Automation of Online Games

    OpenAIRE

    Schoenfeldt, Alexander

    2015-01-01

    State of the art browser games are increasingly complex pieces of software with extensive code bases. With increasing complexity, software becomes harder to maintain. Automated regression testing can simplify these maintenance processes and thereby enable developers as well as testers to spend their workforce more efficiently. This thesis addresses the utilization of automated tests in web applications. As a use case, test automation is applied to an online-based strategy game for the bro...

  5. Computer Assisted Parallel Program Generation

    CERN Document Server

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities and product development. Writing parallel programs is not always a simple task, depending on the problem to be solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, would require parallel computation, and parallel computing needs parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system P-NCAS is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our life toward a programming-free environment in computing science. Research activities on problem solving environments (PSE) started in the 1970s to enhance programming power. The P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  6. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    This book proposes a novel design method that combines both genetic programming (GP) to automatically explore the open-ended design space and bond graphs (BG) to unify design representations of multi-domain Mechatronic systems. Results show that the method, formally called GPBG method, can...... successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  7. The automated medical office.

    Science.gov (United States)

    Petreman, M

    1990-08-01

    With shock and surprise many physicians learned in the 1980s that they must change the way they do business. Competition for patients, increasing government regulation, and the rapidly escalating risk of litigation force physicians to seek modern remedies in office management. The author describes a medical clinic that strives to be paperless, using electronic innovation to solve the problems of medical practice management. A computer software program to automate information management in a clinic shows that practical thinking linked to advanced technology can greatly improve office efficiency.

  8. AUTOMATED API TESTING APPROACH

    Directory of Open Access Journals (Sweden)

    SUNIL L. BANGARE

    2012-02-01

    Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. With the help of software testing we can verify or validate the software product. Testing is normally done after development, but it can also be performed during the development process. This paper gives a brief introduction to an automated API testing tool. Such testing reduces effort after development is complete, saves time as well as money, and is helpful in industry as well as in colleges.
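
    As an illustration of the kind of test such a tool automates (a sketch, not the tool described above), the following pytest-style checks exercise a hypothetical REST endpoint; the base URL and the expected fields are assumptions.

        import requests  # third-party HTTP client

        BASE_URL = "http://localhost:8000/api"  # hypothetical service under test

        def test_get_user_returns_expected_fields():
            # Validate status code, content type and response schema.
            resp = requests.get(f"{BASE_URL}/users/1", timeout=5)
            assert resp.status_code == 200
            assert resp.headers["Content-Type"].startswith("application/json")
            body = resp.json()
            for field in ("id", "name", "email"):
                assert field in body, f"missing field: {field}"

        def test_unknown_user_returns_404():
            resp = requests.get(f"{BASE_URL}/users/999999", timeout=5)
            assert resp.status_code == 404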

  9. World-wide distribution automation systems

    Energy Technology Data Exchange (ETDEWEB)

    Devaney, T.M.

    1994-12-31

    A worldwide power distribution automation system is outlined. Distribution automation is defined and the status of utility automation is discussed. Other topics discussed include a distribution management system; substation, feeder, and customer functions; potential benefits; automation costs; planning and engineering considerations; automation trends; databases; system operation; computer modeling of the system; and distribution management systems.

  10. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-01-01

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially-separated polarization components of a laser using a digital micromirror device that are subsequently beam combined. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security. PMID:27184813
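
    The serial-versus-parallel distinction can be illustrated with Jones calculus: cascaded elements compose by a matrix product, while split, individually intensity-modulated and recombined arms compose by a weighted matrix sum. The sketch below is only a linear-algebra illustration under assumed weights and angles, not the authors' optical setup.

        import numpy as np

        def polarizer(theta):
            # Jones matrix of an ideal linear polarizer at angle theta.
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c * c, c * s], [c * s, s * s]])

        E_in = np.array([1.0, 0.0])  # horizontally polarized input field

        # Serial architecture: elements applied one after another (product).
        E_serial = polarizer(np.pi / 3) @ polarizer(np.pi / 6) @ E_in

        # Parallel architecture: the beam is split, each arm is intensity-
        # modulated (weights w) and transformed, then the arms are combined,
        # so the effective operator is a sum of matrices, not a product.
        weights = [0.7, 0.3]
        arms = [polarizer(0.0), polarizer(np.pi / 4)]
        E_parallel = sum(w * (M @ E_in) for w, M in zip(weights, arms))

        print(E_serial, E_parallel)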

  11. Accelerated Parallel Texture Optimization

    Institute of Scientific and Technical Information of China (English)

    Hao-Da Huang; Xin Tong; Wen-Cheng Wang

    2007-01-01

    Texture optimization is a texture synthesis method that can efficiently reproduce various features of exemplar textures. However, its slow synthesis speed limits its usage in many interactive or real time applications. In this paper, we propose a parallel texture optimization algorithm to run on GPUs. In our algorithm, k-coherence search and principle component analysis (PCA) are used for hardware acceleration, and two acceleration techniques are further developed to speed up our GPU-based texture optimization. With a reasonable precomputation cost, the online synthesis speed of our algorithm is 4000+ times faster than that of the original texture optimization algorithm and thus our algorithm is capable of interactive applications. The advantages of the new scheme are demonstrated by applying it to interactive editing of flow-guided synthesis.

  14. Sunglint Detection for Unmanned and Automated Platforms

    Directory of Open Access Journals (Sweden)

    Oliver Zielinski

    2012-09-01

    We present an empirical quality control protocol for above-water radiometric sampling focussing on identifying sunglint situations. Using hyperspectral radiometers, measurements were taken on an automated and unmanned seaborne platform in northwest European shelf seas. In parallel, a camera system was used to capture sea surface and sky images of the investigated points. The quality control consists of meteorological flags, to mask dusk, dawn, precipitation and low light conditions, utilizing incoming solar irradiance (ES) spectra. Using 629 of a total of 3,121 spectral measurements that passed the test conditions of the meteorological flagging, a new sunglint flag was developed. To detect sunglint visible in the simultaneously available sea surface images, a sunglint image detection algorithm was developed and implemented. Applying this algorithm, two sets of data were derived: one with detectable sunglint (white pixels in the image) and one without. To identify the most effective sunglint flagging criteria we evaluated the spectral characteristics of these two data sets using water leaving radiance (LW) and remote sensing reflectance (RRS). Spectral conditions satisfying 'mean LW (700–950 nm) < 2 mW∙m−2∙nm−1∙sr−1' or alternatively 'minimum RRS (700–950 nm) < 0.010 sr−1' mask most measurements affected by sunglint, providing an efficient empirical flagging of sunglint in automated quality control.
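
    For illustration, the two reported spectral criteria are easy to encode in an automated pipeline; the function name, array layout and pass/mask convention below are our own assumptions, not the authors' implementation.

        import numpy as np

        def passes_sunglint_flag(wavelengths, lw, rrs):
            # Apply the two criteria over the 700-950 nm band; a spectrum
            # satisfying either one is treated as glint-free and kept.
            band = (wavelengths >= 700) & (wavelengths <= 950)
            mean_lw_ok = lw[band].mean() < 2.0     # mW m-2 nm-1 sr-1
            min_rrs_ok = rrs[band].min() < 0.010   # sr-1
            return mean_lw_ok or min_rrs_ok

        wl = np.arange(400, 951, 10)
        lw = np.full(wl.shape, 1.0)      # synthetic glint-free values
        rrs = np.full(wl.shape, 0.005)
        print(passes_sunglint_flag(wl, lw, rrs))   # -> True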

  15. Automated landmark-guided deformable image registration

    Science.gov (United States)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  16. To parallelize or not to parallelize, bugs issue

    OpenAIRE

    El-Nashar, Alaa I.; Masaki, Nakamura

    2013-01-01

    Program correctness is one of the most difficult challenges in parallel programming. Message Passing Interface MPI is widely used in writing parallel applications. Since MPI is not a compiled language, the programmer will be enfaced with several programming bugs.This paper presents the most common programming bugs arise in MPI programs to help the programmer to compromise between the advantage of parallelism and the extra effort needed to detect and fix such bugs. An algebraic specification o...

  17. About Parallel Programming: Paradigms, Parallel Execution and Collaborative Systems

    Directory of Open Access Journals (Sweden)

    Loredana MOCEAN

    2009-01-01

    In recent years, efforts have been made to delineate a stable and unitary framework in which the problems of logical parallel processing can find solutions, at least at the level of imperative languages. The results obtained so far are not at the level of the efforts made. This paper aims to be a small contribution to these efforts. We propose an overview of parallel programming, parallel execution and collaborative systems.

  18. Automation from pictures

    International Nuclear Information System (INIS)

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. (author)
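
    The pipeline sketched above (a drawn STD translated to a state notation and compiled into an event-driven state program) can be mimicked with a toy transition table; the states, events and actions below are placeholders, not the Los Alamos notation.

        # Each entry maps (state, event) -> (next state, action).
        TABLE = {
            ("idle",    "start_cmd"): ("running", lambda: print("pump on")),
            ("running", "fault"):     ("faulted", lambda: print("alarm")),
            ("running", "stop_cmd"):  ("idle",    lambda: print("pump off")),
            ("faulted", "reset"):     ("idle",    lambda: print("cleared")),
        }

        def run_state_program(events, state="idle"):
            # Execution is driven purely by external events, so several
            # such state programs can share one host processor.
            for ev in events:
                key = (state, ev)
                if key in TABLE:          # unknown events are ignored
                    state, action = TABLE[key]
                    action()
            return state

        print(run_state_program(["start_cmd", "fault", "reset"]))  # -> idle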

  19. Automated Test Case Generation

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I would like to present the concept of automated test case generation. I work on it as part of my PhD and I think it would be interesting also for other people. It is also the topic of a workshop paper that I am introducing in Paris. (abstract below) Please note that the talk itself would be more general and not about the specifics of my PhD, but about the broad field of Automated Test Case Generation. I would introduce the main approaches (combinatorial testing, symbolic execution, adaptive random testing) and their advantages and problems. (oracle problem, combinatorial explosion, ...) Abstract of the paper: Over the last decade code-based test case generation techniques such as combinatorial testing or dynamic symbolic execution have seen growing research popularity. Most algorithms and tool implementations are based on finding assignments for input parameter values in order to maximise the execution branch coverage. Only few of them consider dependencies from outside the Code Under Test’s scope such...

  20. Automated Postediting of Documents

    CERN Document Server

    Knight, K; Knight, Kevin; Chander, Ishwar

    1994-01-01

    Large amounts of low- to medium-quality English texts are now being produced by machine translation (MT) systems, optical character readers (OCR), and non-native speakers of English. Most of this text must be postedited by hand before it sees the light of day. Improving text quality is tedious work, but its automation has not received much research attention. Anyone who has postedited a technical report or thesis written by a non-native speaker of English knows the potential of an automated postediting system. For the case of MT-generated text, we argue for the construction of postediting modules that are portable across MT systems, as an alternative to hardcoding improvements inside any one system. As an example, we have built a complete self-contained postediting module for the task of article selection (a, an, the) for English noun phrases. This is a notoriously difficult problem for Japanese-English MT. Our system contains over 200,000 rules derived automatically from online text resources. We report on l...

  1. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam; Illsley, Jeannette

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  2. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    Science.gov (United States)

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-01

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not necessarily at the pace needed to face the challenges of the big data being acquired. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
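
    A minimal sketch of the data-decomposition idea follows; search_chunk is a placeholder standing in for an unmodified engine (e.g. X!Tandem run on one decomposed file), and the slicing/concatenation mirrors the mzXML decomposition and pepXML recomposition steps.

        from concurrent.futures import ProcessPoolExecutor
        from itertools import chain

        def search_chunk(spectra_chunk):
            # Placeholder for writing the chunk out and invoking the
            # unmodified search engine on it.
            return [f"PSM for {s}" for s in spectra_chunk]

        def parallel_search(spectra, n_workers=4):
            # Decompose the data, not the engine: each worker searches an
            # independent chunk; results are recomposed afterwards.
            chunks = [spectra[i::n_workers] for i in range(n_workers)]
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                results = pool.map(search_chunk, chunks)
            return list(chain.from_iterable(results))

        if __name__ == "__main__":
            print(parallel_search([f"spectrum_{i}" for i in range(10)]))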

  3. Get smart! automate your house!

    NARCIS (Netherlands)

    Van Amstel, P.; Gorter, N.; De Rouw, J.

    2016-01-01

    This "designers' manual" was made during the TIDO-course AR0531, Innovation and Sustainability. It will help you reduce both energy usage and costs by automating your home, and gives an introduction to a number of home automation systems that every homeowner can install.

  4. Automated Methods Of Corrosion Measurements

    DEFF Research Database (Denmark)

    Bech-Nielsen, Gregers; Andersen, Jens Enevold Thaulov; Reeve, John Ch;

    1997-01-01

    The chapter describes the following automated measurements: Corrosion Measurements by Titration, Imaging Corrosion by Scanning Probe Microscopy, Critical Pitting Temperature and Application of the Electrochemical Hydrogen Permeation Cell.

  5. Automated separation for heterogeneous immunoassays

    OpenAIRE

    Truchaud, A.; Barclay, J; Yvert, J. P.; Capolaghi, B.

    1991-01-01

    Besides general requirements for modern automated systems, immunoassay automation involves specific requirements, such as a separation step for heterogeneous immunoassays. Systems are designed according to the solid phase selected: dedicated or open robots for coated tubes and wells, systems nearly similar to chemistry analysers in the case of magnetic particles, and a completely original design for those using porous and film materials.

  6. Automated Test-Form Generation

    Science.gov (United States)

    van der Linden, Wim J.; Diao, Qi

    2011-01-01

    In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
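
    A toy version of the item-selection step, assuming the open-source PuLP modelling library; the item bank, information values and the single content constraint are invented for illustration.

        import pulp

        info    = [0.9, 0.7, 0.8, 0.4, 0.6]   # item information at theta = 0
        is_geom = [1,   0,   1,   0,   1]     # content attribute per item
        n_items, test_len, min_geom = len(info), 3, 2

        prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
        x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]

        # Maximize information subject to form length and content coverage.
        prob += pulp.lpSum(info[i] * x[i] for i in range(n_items))
        prob += pulp.lpSum(x) == test_len
        prob += pulp.lpSum(is_geom[i] * x[i] for i in range(n_items)) >= min_geom

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([i for i in range(n_items) if x[i].value() == 1])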

  7. Translation: Aids, Robots, and Automation.

    Science.gov (United States)

    Andreyewsky, Alexander

    1981-01-01

    Examines electronic aids to translation both as ways to automate it and as an approach to solve problems resulting from shortage of qualified translators. Describes the limitations of robotic MT (Machine Translation) systems, viewing MAT (Machine-Aided Translation) as the only practical solution and the best vehicle for further automation. (MES)

  8. Opening up Library Automation Software

    Science.gov (United States)

    Breeding, Marshall

    2009-01-01

    Throughout the history of library automation, the author has seen a steady advancement toward more open systems. In the early days of library automation, when proprietary systems dominated, the need for standards was paramount since other means of inter-operability and data exchange weren't possible. Today's focus on Application Programming…

  9. Interactive Parallel and Distributed Processing

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Pagliarini, Luigi

    2010-01-01

    We present the concept of interactive parallel and distributed processing, and the challenges that programmers face in designing interactive parallel and distributed systems. Specifically, we introduce the challenges that are met and the decisions that need to be taken with respect...... to implement interactive parallel and distributed processing with different software behavioural models such as open loop, randomness based, rule based, user interaction based, AI and ALife based software....

  10. Patterns for Parallel Software Design

    CERN Document Server

    Ortega-Arjona, Jorge Luis

    2010-01-01

    Essential reading to understand patterns for parallel programming Software patterns have revolutionized the way we think about how software is designed, built, and documented, and the design of parallel software requires you to consider other particular design aspects and special skills. From clusters to supercomputers, success heavily depends on the design skills of software developers. Patterns for Parallel Software Design presents a pattern-oriented software architecture approach to parallel software design. This approach is not a design method in the classic sense, but a new way of managin

  11. Parallel Programming in the Age of Ubiquitous Parallelism

    Science.gov (United States)

    Pingali, Keshav

    2014-04-01

    Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs
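
    A minimal sketch of the operator formulation: a worklist of active nodes plus an operator that reads and writes a small neighborhood around each one. A parallel scheduler is free to process non-overlapping neighborhoods simultaneously; the toy graph relaxation below is our own example, not taken from the talk.

        from collections import deque

        graph = {"a": ["b", "c"], "b": ["c"], "c": []}
        label = {"a": 0, "b": 9, "c": 9}

        worklist = deque(graph)           # initially every node is active
        while worklist:
            n = worklist.popleft()        # a scheduler may reorder these
            for m in graph[n]:            # the operator touches only the
                if label[n] + 1 < label[m]:   # neighborhood of n
                    label[m] = label[n] + 1
                    worklist.append(m)    # newly activated node

        print(label)   # -> {'a': 0, 'b': 1, 'c': 1}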

  12. Trajectories in parallel optics.

    Science.gov (United States)

    Klapp, Iftach; Sochen, Nir; Mendlovic, David

    2011-10-01

    In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space variant systems and reduces their system condition number from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526.
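
    The conditioning idea is easy to reproduce numerically: given a badly conditioned matrix A, an auxiliary operator B that boosts the small singular modes makes the stacked system [A; B] well conditioned. The construction below, with our own choice of target singular value, is a linear-algebra sketch rather than the paper's optical design.

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(8, 8))
        A[-1] = A[0] + 1e-4 * rng.normal(size=8)   # nearly dependent rows

        U, s, Vt = np.linalg.svd(A)
        s_target = 0.5 * s.max()
        # Per-mode boost so every singular value of [A; B] is >= s_target.
        t = np.sqrt(np.maximum(s_target**2 - s**2, 0.0))
        B = np.diag(t) @ Vt                        # auxiliary system rows

        print(np.linalg.cond(A))                   # very large
        print(np.linalg.cond(np.vstack([A, B])))   # ~2 = s.max()/s_target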

  13. Parallel Algorithms for Normalization

    CERN Document Server

    Boehm, Janko; Laplagne, Santiago; Pfister, Gerhard; Steenpass, Andreas; Steidel, Stefan

    2011-01-01

    Given a reduced affine algebra A over a perfect field K, we present parallel algorithms to compute the normalization \\bar{A} of A. Our starting point is the algorithm of Greuel, Laplagne, and Seelisch, which is an improvement of de Jong's algorithm. First, we propose to stratify the singular locus Sing(A) in a way which is compatible with normalization, apply a local version of the normalization algorithm at each stratum, and find \\bar{A} by putting the local results together. Second, in the case where K = Q is the field of rationals, we propose modular versions of the global and local algorithms. We have implemented our algorithms in the computer algebra system SINGULAR and compare their performance with that of other algorithms. In the case where K = Q, we also discuss the use of modular computations of Groebner bases, radicals and primary decompositions. We point out that in most examples, the new algorithms outperform the algorithm of Greuel, Laplagne, and Seelisch by far, even if we do not run them in pa...

  14. Automated Motivic Analysis

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2016-01-01

    Motivic analysis provides very detailed understanding of musical compositions, but is also particularly difficult to formalize and systematize. A computational automation of the discovery of motivic patterns cannot be reduced to a mere extraction of all possible sequences of descriptions....... The systematic approach inexorably leads to a proliferation of redundant structures that needs to be addressed properly. Global filtering techniques cause a drastic elimination of interesting structures that damages the quality of the analysis. On the other hand, a selection of closed patterns allows...... for lossless compression. The structural complexity resulting from successive repetitions of patterns can be controlled through a simple modelling of cycles. Generally, motivic patterns cannot always be defined solely as sequences of descriptions in a fixed set of dimensions: throughout the descriptions...

  15. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  16. Automated Electrostatics Environmental Chamber

    Science.gov (United States)

    Calle, Carlos; Lewis, Dean C.; Buchanan, Randy K.; Buchanan, Aubri

    2005-01-01

    The Mars Electrostatics Chamber (MEC) is an environmental chamber designed primarily to create atmospheric conditions like those at the surface of Mars to support experiments on electrostatic effects in the Martian environment. The chamber is equipped with a vacuum system, a cryogenic cooling system, an atmospheric-gas replenishing and analysis system, and a computerized control system that can be programmed by the user and that provides both automation and options for manual control. The control system can be set to maintain steady Mars-like conditions or to impose temperature and pressure variations of a Mars diurnal cycle at any given season and latitude. In addition, the MEC can be used in other areas of research because it can create steady or varying atmospheric conditions anywhere within the wide temperature, pressure, and composition ranges between the extremes of Mars-like and Earth-like conditions.

  17. Automated Standard Hazard Tool

    Science.gov (United States)

    Stebler, Shane

    2014-01-01

    The current system used to generate standard hazard reports is considered cumbersome and iterative. This study defines a structure for this system's process in a clear, algorithmic way so that standard hazard reports and basic hazard analysis may be completed using a centralized, web-based computer application. To accomplish this task, a test server is used to host a prototype of the tool during development. The prototype is configured to easily integrate into NASA's current server systems with minimal alteration. Additionally, the tool is easily updated and provides NASA with a system that may grow to accommodate future requirements and possibly, different applications. Results of this project's success are outlined in positive, subjective reviews completed by payload providers and NASA Safety and Mission Assurance personnel. Ideally, this prototype will increase interest in the concept of standard hazard automation and lead to the full-scale production of a user-ready application.

  18. Automated synthetic scene generation

    Science.gov (United States)

    Givens, Ryan N.

    Physics-based simulations generate synthetic imagery to help organizations anticipate system performance of proposed remote sensing systems. However, manually constructing synthetic scenes which are sophisticated enough to capture the complexity of real-world sites can take days to months depending on the size of the site and desired fidelity of the scene. This research, sponsored by the Air Force Research Laboratory's Sensors Directorate, successfully developed an automated approach to fuse high-resolution RGB imagery, lidar data, and hyperspectral imagery and then extract the necessary scene components. The method greatly reduces the time and money required to generate realistic synthetic scenes and developed new approaches to improve material identification using information from all three of the input datasets.

  19. Automated Essay Scoring

    Directory of Open Access Journals (Sweden)

    Semire DIKLI

    2006-01-01

    The impacts of computers on writing have been widely studied for three decades. Even basic computer functions, i.e. word processing, have been of great assistance to writers in modifying their essays. The research on Automated Essay Scoring (AES) has revealed that computers have the capacity to function as a more effective cognitive tool (Attali, 2004). AES is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). Revision and feedback are essential aspects of the writing process. Students need to receive feedback in order to increase their writing quality. However, responding to student papers can be a burden for teachers. Particularly if they have large numbers of students and if they assign frequent writing assignments, providing individual feedback to student essays might be quite time consuming. AES systems can be very useful because they can provide the student with a score as well as feedback within seconds (Page, 2003). Four types of AES systems are widely used by testing companies, universities, and public schools: Project Essay Grader (PEG), Intelligent Essay Assessor (IEA), E-rater, and IntelliMetric. AES is a developing technology. Many AES systems are used to overcome time, cost, and generalizability issues in writing assessment. The accuracy and reliability of these systems have been proven to be high. The search for excellence in machine scoring of essays is continuing and numerous studies are being conducted to improve the effectiveness of the AES systems.

  20. A Study of Applying NEC and Non-Blocking Master-Slave Parallel Genetic Algorithms to Automated Antenna Design

    Institute of Scientific and Technical Information of China (English)

    陈星; 黄卡玛; 赵翔

    2004-01-01

    Automated design of antenna structures by combining optimization algorithms with numerical antenna analysis is an important trend in modern antenna research. This paper discusses the principles and workflow of automated antenna design and builds an automated antenna design software platform based on genetic algorithms and the NEC (Numerical Electromagnetics Code) antenna analysis program. Parallel computing is adopted to improve design efficiency: a Beowulf cluster was assembled, and an implementation scheme for a non-blocking master-slave parallel genetic algorithm is proposed for the first time. Taking the automated design of a tapered-helix/conical-horn antenna as an example, the results show that the platform can design complex antennas accurately and effectively. The parallel efficiency on 16 nodes reached 82.25%, exceeding comparable published results.
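
    The essence of the non-blocking master-slave scheme is that the master dispatches fitness evaluations (each standing in for an NEC antenna simulation) and consumes results as they complete, instead of blocking on whole synchronized generations. The sketch below uses a local process pool; all operators, names and parameters are illustrative assumptions.

        import random
        from concurrent.futures import (FIRST_COMPLETED,
                                        ProcessPoolExecutor, wait)

        def fitness(x):            # stand-in for one NEC antenna run
            return -sum((xi - 0.5) ** 2 for xi in x)

        def mutate(x):
            return [min(1.0, max(0.0, xi + random.gauss(0, 0.1))) for xi in x]

        def nonblocking_ga(pop_size=8, dim=4, evals=100):
            pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
            best, best_f, done_count = None, float("-inf"), 0
            with ProcessPoolExecutor() as pool:
                pending = {pool.submit(fitness, ind): ind for ind in pop}
                while done_count < evals:
                    # Non-blocking master: take whichever evaluation
                    # finishes first and immediately dispatch a new one.
                    done, _ = wait(pending, return_when=FIRST_COMPLETED)
                    for fut in done:
                        ind, f = pending.pop(fut), fut.result()
                        done_count += 1
                        if f > best_f:
                            best, best_f = ind, f
                        child = mutate(best)
                        pending[pool.submit(fitness, child)] = child
            return best, best_f

        if __name__ == "__main__":
            print(nonblocking_ga())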

  1. Parallel Backtracking with Answer Memoing for Independent And-Parallelism

    CERN Document Server

    de Guzmán, Pablo Chico; Carro, Manuel; Hermenegildo, Manuel V

    2011-01-01

    Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals which will not interfere with each other at run time. This can be done safely even if such goals can produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While in principle simplifying the implementation, recomputation can be very inefficient if the granularity of the parallel goals is large enough and they produce several answers, while sequentially ordered backtracking limits parallelism. And, despite the expected simplification, the implementation of the classic schemes has proved to involve complex engineering, with the consequent difficulty for system maintenance and extension, while still frequently running into the well-known trapped goal and garbage slot problems. This work presents an alternative parallel backtracking model for IAP and its implementation. The model features parallel out-of-or...

  2. Shifting Control Algorithm for a Single-Axle Parallel Plug-In Hybrid Electric Bus Equipped with EMT

    OpenAIRE

    Yunyun Yang; Sen Wu; Xiang Fu

    2014-01-01

    Exploiting the fast response speed of electric motors, an electric-drive automated mechanical transmission (EMT) is proposed as a novel type of transmission in this paper. Replacing the friction synchronization shifting of automated manual transmissions (AMT) in HEVs, the EMT can achieve active speed synchronization during shifting. The dynamic model of a single-axle parallel PHEV equipped with the EMT is built up, and the dynamic properties of the gearshift process are also described. I...

  3. High-throughput automated system for statistical biosensing employing microcantilevers arrays

    DEFF Research Database (Denmark)

    Bosco, Filippo; Chen, Ching H.; Hwu, En T.;

    2011-01-01

    In this paper we present a completely new and fully automated system for parallel microcantilever-based biosensing. Our platform is able to monitor simultaneously the change of resonance frequency (dynamic mode), of deflection (static mode), and of surface roughness of hundreds of cantilevers...

  4. Parallel performance of a preconditioned CG solver for unstructured finite element applications

    Energy Technology Data Exchange (ETDEWEB)

    Shadid, J.N.; Hutchinson, S.A.; Moffat, H.K. [Sandia National Labs., Albuquerque, NM (United States)

    1994-12-31

    A parallel unstructured finite element (FE) implementation designed for message passing MIMD machines is described. This implementation employs automated problem partitioning algorithms for load balancing unstructured grids, a distributed sparse matrix representation of the global finite element equations and a parallel conjugate gradient (CG) solver. In this paper a number of issues related to the efficient implementation of parallel unstructured mesh applications are presented. These include the differences between structured and unstructured mesh parallel applications, major communication kernels for unstructured CG solvers, automatic mesh partitioning algorithms, and the influence of mesh partitioning metrics on parallel performance. Initial results are presented for example finite element (FE) heat transfer analysis applications on a 1024 processor nCUBE 2 hypercube. Results indicate over 95% scaled efficiencies are obtained for some large problems despite the required unstructured data communication.
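
    For reference, the serial skeleton of a CG solver is short; the comments mark the two operations that become the communication kernels in the distributed unstructured-mesh setting. This is generic textbook CG, not the authors' implementation.

        import numpy as np

        def cg(A, b, tol=1e-8, max_iter=200):
            x = np.zeros_like(b)
            r = b - A @ x            # matvec: nearest-neighbor exchange of
            p = r.copy()             # shared-node values in parallel
            rs = r @ r               # dot product: global reduction
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test system
        print(cg(A, np.array([1.0, 2.0])))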

  5. Parallel Adaptive Mesh Refinement

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  6. Automating the radiographic NDT process

    International Nuclear Information System (INIS)

    Automation, the removal of the human element in inspection, has not been generally applied to film radiographic NDT. The justification for automating is not only productivity but also reliability of results. Film remains in the automated system of the future because of its extremely high image content, approximately 8 x 10^9 bits per 14 x 17 (inch) film, the equivalent of 2,200 computer floppy discs. Parts handling systems and robotics, applied in manufacturing and some NDT modalities, should now be applied to film radiographic NDT systems. Automatic film handling can be achieved with the daylight NDT film handling system. Automatic film processing is becoming the standard in industry and can be coupled to the daylight system. Robots offer the opportunity to automate fully the exposure step. Finally, computer aided interpretation appears on the horizon. A unit which laser scans a 14 x 17 (inch) film in 6 - 8 seconds can digitize film information for further manipulation and possible automatic interrogations (computer aided interpretation). The system called FDRS (for Film Digital Radiography System) is moving toward 50 micron (approx. 16 lines/mm) resolution. This is believed to meet the majority of image content needs. We expect the automated system to appear first in parts (modules) as certain operations are automated. The future will see it all come together in an automated film radiographic NDT system (author)

  7. Parallel Computers in Signal Processing

    Directory of Open Access Journals (Sweden)

    Narsingh Deo

    1985-07-01

    Signal processing often requires a great deal of raw computing power for which it is important to take a look at parallel computers. The paper reviews various types of parallel computer architectures from the viewpoint of signal and image processing.

  8. Parallelizing Monte Carlo with PMC

    Energy Technology Data Exchange (ETDEWEB)

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
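
    One PMC requirement, independent and reproducible random sequences regardless of run mode, can be sketched with NumPy seed spawning; the pi estimate and worker layout are our own illustration, not the PMC interface.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def mc_worker(args):
            # Each worker receives its own independent, reproducible
            # stream, so results match across serial and parallel modes.
            seed, n = args
            rng = np.random.default_rng(seed)
            pts = rng.random((n, 2))
            return np.count_nonzero((pts ** 2).sum(axis=1) < 1.0)

        def pi_estimate(n_total=400_000, n_workers=4):
            seeds = np.random.SeedSequence(12345).spawn(n_workers)
            jobs = [(s, n_total // n_workers) for s in seeds]
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                hits = sum(pool.map(mc_worker, jobs))
            return 4.0 * hits / n_total

        if __name__ == "__main__":
            print(pi_estimate())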

  9. PARALLEL IMPORT: REALITY FOR RUSSIA

    Directory of Open Access Journals (Sweden)

    Т. А. Сухопарова

    2014-01-01

    The problem of parallel import is an urgent question today. Legalization of parallel import in Russia is expedient; this statement is based on an analysis of opposing expert opinions. At the same time it is necessary to consider the negative consequences of such a decision and to apply remedies to minimize them.

  10. Parallel context-free languages

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The relation between the family of context-free languages and the family of parallel context-free languages is examined in this paper. It is proved that the families are incomparable. Finally we prove that the family of languages of finite index is contained in the family of parallel context...

  11. Parallel FFT using Eden Skeletons

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Lobachev, Oleg;

    2009-01-01

    approaches like calculating FFT using a parallel map-and-transpose skeleton provide more flexibility to overcome these problems. Assuming a distributed access to input data and re-organising computation to return results in a distributed way improves the parallel runtime behaviour....

  12. Parallel contingency statistics with Titan.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of these new parallel engines is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting optimal parallel speed-up as the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
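
    The scalability caveat is visible in a map-reduce sketch: each process builds a local table and the reduction merges whole tables, so message size grows with the number of distinct value pairs instead of staying fixed as it does for moment-based statistics. The code below is illustrative Python, not the VTK/Titan engine.

        from collections import Counter
        from concurrent.futures import ProcessPoolExecutor
        from functools import reduce

        def local_table(pairs):
            # Each process builds a contingency table over its own slice.
            return Counter(pairs)

        def parallel_contingency(pairs, n_workers=4):
            slices = [pairs[i::n_workers] for i in range(n_workers)]
            with ProcessPoolExecutor(max_workers=n_workers) as pool:
                tables = pool.map(local_table, slices)
            return reduce(lambda a, b: a + b, tables, Counter())

        if __name__ == "__main__":
            data = [("a", 1), ("a", 2), ("b", 1)] * 1000
            print(parallel_contingency(data))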

  13. Automated Fluid Interface System (AFIS)

    Science.gov (United States)

    1990-01-01

    Automated remote fluid servicing will be necessary for future space missions, as future satellites will be designed for on-orbit consumable replenishment. In order to develop an on-orbit remote servicing capability, a standard interface between a tanker and the receiving satellite is needed. The objective of the Automated Fluid Interface System (AFIS) program is to design, fabricate, and functionally demonstrate compliance with all design requirements for an automated fluid interface system. A description and documentation of the Fairchild AFIS design is provided.

  14. An Analysis of the Relationship of Confucian Thoughts to Chinese Traditional Culture with Western Culture Concerned

    Institute of Scientific and Technical Information of China (English)

    王宝

    2008-01-01

    Drawing on the kernel theories of Confucian thought and the dominant characteristic of Chinese traditional culture, this article introduces how Confucian thought plays a chief role in Chinese traditional culture. The study first analyzed the kernel theories of Confucian thought and the chief characteristic of Chinese traditional culture along two parallel lines. Then Chinese traditional culture was considered in terms of culture patterns, with both Confucian thought and Western culture concerned.

  15. Tools for the automation of large control systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit – SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.
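
    SMI++'s own description language is not reproduced here; the sketch below is a hypothetical Python rendering of its two ingredients, a finite state machine per sub-system plus rules reacting to children's state changes, to show how automation and error recovery can propagate up a control hierarchy.

        # Toy SMI++-style control object: a state plus "when" rules that
        # fire on the children's states (names are illustrative only).
        class ControlObject:
            def __init__(self, name, state="READY"):
                self.name, self.state = name, state
                self.children, self.rules = [], []

            def when(self, condition, then_state):
                self.rules.append((condition, then_state))

            def evaluate(self):
                for child in self.children:     # children decide first,
                    child.evaluate()            # decisions propagate upward
                for condition, then_state in self.rules:
                    if condition(self.children):
                        self.state = then_state

        detector = ControlObject("DETECTOR")
        detector.children = [ControlObject(n) for n in ("HV", "DAQ", "COOLING")]
        detector.when(lambda cs: any(c.state == "ERROR" for c in cs), "RECOVERING")
        detector.when(lambda cs: all(c.state == "RUNNING" for c in cs), "RUNNING")

        detector.children[0].state = "ERROR"   # HV trips
        detector.evaluate()
        print(detector.state)                  # RECOVERING: recovery automated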

  16. Tools for the Automation of Large Distributed Control Systems

    CERN Document Server

    Gaspar, Clara

    2005-01-01

    The new LHC experiments at CERN will have very large numbers of channels to operate. In order to be able to configure and monitor such large systems, a high degree of parallelism is necessary. The control system is built as a hierarchy of sub-systems distributed over several computers. A toolkit - SMI++, combining two approaches: finite state machines and rule-based programming, allows for the description of the various sub-systems as decentralized deciding entities, reacting in real-time to changes in the system, thus providing for the automation of standard procedures and for the automatic recovery from error conditions in a hierarchical fashion. In this paper we will describe the principles and features of SMI++ as well as its integration with an industrial SCADA tool for use by the LHC experiments, and we will try to show that such tools can provide a very convenient mechanism for the automation of large-scale, high-complexity applications.

  17. Automation of servicibility of radio-relay station equipment

    Science.gov (United States)

    Uryev, A. G.; Mishkin, Y. I.; Itkis, G. Y.

    1985-03-01

    Automation of the serviceability inspection of radio relay station equipment must ensure central gathering and primary processing of reliable instrument readings with subsequent display on the control panel, timely detection and recording of failures, sufficiently early warning based on analysis of deterioration symptoms, and correct remote measurement of equipment performance parameters. Such an inspection will minimize transmission losses while reducing nonproductive time and labor spent on documentation and measurement. A multichannel automated inspection system for this purpose should operate by a parallel rather than sequential procedure. Digital data processing is more expedient here than analog methods, and therefore analog-to-digital converters are required. Special normal, above-limit, and below-limit test signals provide a means of self-inspection, to which must be added adequate interference immunization, stabilization, and standby power supply. Use of a microcomputer permits overall refinement and expansion of the inspection system while it minimizes, though does not completely eliminate, dependence on subjective judgment.

  18. Towards Automated Design, Analysis and Optimization of Declarative Curation Workflows

    Directory of Open Access Journals (Sweden)

    Tianhong Song

    2014-10-01

    Full Text Available Data curation is increasingly important. Our previous work on a Kepler curation package has demonstrated advantages that come from automating data curation pipelines by using workflow systems. However, manually designed curation workflows can be error-prone and inefficient due to a lack of user understanding of the workflow system, misuse of actors, or human error. Correcting problematic workflows is often very time-consuming. A more proactive workflow system can help users avoid such pitfalls. For example, static analysis before execution can be used to detect the potential problems in a workflow and help the user to improve workflow design. In this paper, we propose a declarative workflow approach that supports semi-automated workflow design, analysis and optimization. We show how the workflow design engine helps users to construct data curation workflows, how the workflow analysis engine detects different design problems of workflows and how workflows can be optimized by exploiting parallelism.
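
    As a toy illustration of the static analysis step (the actor names and type tags below are invented, not Kepler's): checking declared input/output types along each workflow connection flags a design problem before anything is executed.

        # Static type check of a declarative workflow graph: every
        # connection's source output must match the target's input.
        actors = {
            "ReadCSV":   {"in": None,    "out": "table"},
            "Dedupe":    {"in": "table", "out": "table"},
            "PlotDates": {"in": "dates", "out": "figure"},
        }
        workflow = [("ReadCSV", "Dedupe"), ("Dedupe", "PlotDates")]

        def check(workflow, actors):
            return [f"{src} -> {dst}: produces {actors[src]['out']!r}, "
                    f"expects {actors[dst]['in']!r}"
                    for src, dst in workflow
                    if actors[src]["out"] != actors[dst]["in"]]

        for problem in check(workflow, actors):
            print("design problem:", problem)   # flags Dedupe -> PlotDates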

  19. National Automated Conformity Inspection Process

    Data.gov (United States)

    Department of Transportation — The National Automated Conformity Inspection Process (NACIP) Application is intended to expedite the workflow process as it pertains to the FAA Form 81 0-10 Request...

  20. Evolution of Home Automation Technology

    Directory of Open Access Journals (Sweden)

    Mohd. Rihan

    2009-01-01

    Full Text Available In modern society home and office automation has become increasingly important, providing ways to interconnect various home appliances. This interconnection results in faster transfer of information within homes/offices, leading to better home management and improved user experience. Home automation, in essence, is a technology that integrates various electrical systems of a home to provide enhanced comfort and security. Users are granted convenient and complete control over all the electrical home appliances and are relieved from the tasks that previously required manual control. This paper tracks the development of home automation technology over the last two decades. Various home automation technologies are explained briefly, giving a chronological account of the evolution of one of the most talked-about technologies of recent times.

  1. Automating the Purple Crow Lidar

    Science.gov (United States)

    Hicks, Shannon; Sica, R. J.; Argall, P. S.

    2016-06-01

    The Purple Crow LiDAR (PCL) was built to measure short and long term coupling between the lower, middle, and upper atmosphere. The initial component of my MSc. project is to automate two key elements of the PCL: the rotating liquid mercury mirror and the Zaber alignment mirror. In addition to the automation of the Zaber alignment mirror, it is also necessary to describe the mirror's movement and positioning errors. Its properties will then be added into the alignment software. Once the alignment software has been completed, we will compare the new alignment method with the previous manual procedure. This is the first among several projects that will culminate in a fully-automated lidar. Eventually, we will be able to work remotely, thereby increasing the amount of data we collect. This paper will describe the motivation for automation, the methods we propose, preliminary results for the Zaber alignment error analysis, and future work.

  2. Home automation with Intel Galileo

    CERN Document Server

    Dundar, Onur

    2015-01-01

    This book is for anyone who wants to learn Intel Galileo for home automation and cross-platform software development. No knowledge of programming with Intel Galileo is assumed, but knowledge of the C programming language is essential.

  3. Towards automated traceability maintenance.

    Science.gov (United States)

    Mäder, Patrick; Gotel, Orlena

    2012-10-01

    Traceability relations support stakeholders in understanding the dependencies between artifacts created during the development of a software system and thus enable many development-related tasks. To ensure that the anticipated benefits of these tasks can be realized, it is necessary to have an up-to-date set of traceability relations between the established artifacts. This goal requires the creation of traceability relations during the initial development process. Furthermore, the goal also requires the maintenance of traceability relations over time as the software system evolves in order to prevent their decay. In this paper, an approach is discussed that supports the (semi-) automated update of traceability relations between requirements, analysis and design models of software systems expressed in the UML. This is made possible by analyzing change events that have been captured while working within a third-party UML modeling tool. Within the captured flow of events, development activities comprised of several events are recognized. These are matched with predefined rules that direct the update of impacted traceability relations. The overall approach is supported by a prototype tool and empirical results on the effectiveness of tool-supported traceability maintenance are provided. PMID:23471308

  4. Automated Gas Distribution System

    Science.gov (United States)

    Starke, Allen; Clark, Henry

    2012-10-01

    The cyclotron of Texas A&M University is one of the few and prized cyclotrons in the country. Behind the scenes of the cyclotron is a confusing and dangerous setup of the ion sources that supply the cyclotron with particles for acceleration. Using this machine involves a time-consuming and even wasteful step-by-step process of switching gases, purging, and other important operations that must be done manually to keep the system functioning properly while also maintaining the safety of the working environment. The new gas distribution system for the ion source prevents many of the problems created by the older manual setup. It can be controlled manually more easily than before but, like most of the technology and machines in the cyclotron, is mainly operated through software developed in the graphical programming environment LabVIEW. The automated gas distribution system provides multiple ports for a selection of different gases, to decrease the amount of gas wasted in switching gases, and a port for the vacuum, to decrease the time spent purging the manifold. The LabVIEW software makes the operation of the cyclotron and ion sources easier, and safer, for anyone to use.

  5. Practical Parallel External Memory Algorithms via Simulation of Parallel Algorithms

    CERN Document Server

    Robillard, David E

    2010-01-01

    This thesis introduces PEMS2, an improvement to PEMS (Parallel External Memory System). PEMS executes Bulk-Synchronous Parallel (BSP) algorithms in an External Memory (EM) context, enabling computation with very large data sets which exceed the size of main memory. Many parallel algorithms have been designed and implemented for Bulk-Synchronous Parallel models of computation. Such algorithms generally assume that the entire data set is stored in main memory at once. PEMS overcomes this limitation without requiring any modification to the algorithm by using disk space as memory for additional "virtual processors". Previous work has shown this to be a promising approach which scales well as computational resources (i.e. processors and disks) are added. However, the technique incurs significant overhead when compared with purpose-built EM algorithms. PEMS2 introduces refinements to the simulation process intended to reduce this overhead as well as the amount of disk space required to run the simulation. New func...

  6. Parallel programming characteristics of a DSP-based parallel system

    Institute of Scientific and Technical Information of China (English)

    GAO Shu; GUO Qing-ping

    2006-01-01

    This paper first introduces the structure and working principle of a DSP-based parallel system, its parallel accelerating board, and the SHARC DSP chip. It then investigates the system's programming characteristics, especially its communication modes, discusses how to design parallel algorithms, and presents a domain-decomposition-based complete multigrid parallel algorithm with virtual boundary forecast (VBF) for solving large-scale, complicated heat problems. Finally, the Mandelbrot set and a non-linear heat transfer equation of a ceramic/metal composite material are taken as examples to illustrate the implementation of the proposed algorithm. The results show that the solutions are highly efficient and achieve linear speedup.
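
    The sketch below shows the baseline that a virtual boundary forecast improves on: a 1-D heat equation split into two subdomains that exchange ghost ("virtual boundary") values every step (the VBF scheme forecasts these values rather than waiting for the exchange).

        # Explicit 1-D heat equation on two subdomains with ghost-value
        # exchange at the shared interface each time step.
        import numpy as np

        nx, steps, r = 20, 200, 0.25        # r = alpha*dt/dx**2, stable if <= 0.5
        u = np.zeros(nx); u[nx // 2] = 1.0  # initial heat spike
        left, right = u[:nx // 2].copy(), u[nx // 2:].copy()

        def step(sub, ghost_l, ghost_r):
            p = np.concatenate(([ghost_l], sub, [ghost_r]))
            return sub + r * (p[:-2] - 2 * sub + p[2:])

        for _ in range(steps):
            gl, gr = right[0], left[-1]     # exchange interface values first
            left = step(left, 0.0, gl)      # 0.0 = fixed outer boundaries
            right = step(right, gr, 0.0)

        print(np.concatenate([left, right]).round(3))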

  7. Aprendizaje automático

    OpenAIRE

    Moreno, Antonio

    2006-01-01

    This book introduces the basic concepts of one of the most actively studied branches of artificial intelligence: machine learning. It covers topics such as inductive learning, analogical reasoning, explanation-based learning, neural networks, genetic algorithms, case-based reasoning, and theoretical approaches to machine learning.

  8. 2015 Chinese Intelligent Automation Conference

    CERN Document Server

    Li, Hongbo

    2015-01-01

    Proceedings of the 2015 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’15, held in Fuzhou, China. The topics include adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, reconfigurable control, etc. Engineers and researchers from academia, industry and the government can gain valuable insights into interdisciplinary solutions in the field of intelligent automation.

  9. Technology modernization assessment flexible automation

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, D.W.; Boyd, D.R.; Hansen, N.H.; Hansen, M.A.; Yount, J.A.

    1990-12-01

    The objectives of this report are: to present technology assessment guidelines to be considered in conjunction with defense regulations before an automation project is developed; to give examples showing how assessment guidelines may be applied to a current project; and to present several potential areas where automation might be applied successfully in the depot system. Depots perform primarily repair and remanufacturing operations, with limited small-batch manufacturing runs. While certain activities (such as Management Information Systems and warehousing) are directly applicable to either environment, the majority of applications will require combining existing and emerging technologies in different ways, suited to the special needs of the depot remanufacturing environment. Industry generally enjoys the ability to revise its product lines seasonally, followed by batch runs of thousands or more. Depot batch runs are in the tens, at best the hundreds, of parts, with a potential for large variation in product mix; reconfiguration may be required on a week-to-week basis. This need for a higher degree of flexibility suggests a higher level of operator interaction and, in turn, control systems that go beyond the state of the art for less flexible automation and industry in general. This report investigates the benefits of and barriers to automation and concludes that, while significant benefits do exist, depots must be prepared to carefully investigate the technical feasibility of each opportunity and the life-cycle costs associated with implementation. Implementation is suggested in two ways: (1) develop an implementation plan for automation technologies based on the results of small demonstration automation projects; (2) use phased implementation for both these and later-stage automation projects to allow major technical and administrative risk issues to be addressed. 10 refs., 2 figs., 2 tabs. (JF)

  10. Parallel Backtracking with Answer Memoing for Independent And-Parallelism

    OpenAIRE

    Chico de Guzmán, Pablo; Casas, Amadeo; Carro Liñares, Manuel; Hermenegildo, Manuel V.

    2011-01-01

    Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals which will not interfere with each other at run time. This can be done safely even if such goals can produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While in principle simplifying the implementation, recomputation can be very inefficient if the granularity of the parallel goal...

  11. Software for parallel processing applications

    International Nuclear Information System (INIS)

    Parallel computing has been used to solve large computing problems in high-energy physics. Typical problems include offline event reconstruction, Monte Carlo event generation and reconstruction, and lattice QCD calculations. Fermilab has extensive experience in parallel computing, using CPS (cooperative processes software) and networked UNIX workstations for the loosely coupled problems of event reconstruction and Monte Carlo generation, and CANOPY and ACPMAPS for lattice QCD. Both systems will be discussed. Parallel software has been developed by many other groups, both commercial and research-oriented. Examples include PVM, Express and network-Linda for workstation clusters and PCN and STRAND88 for more tightly coupled machines

  12. Parallel Triangles Counting Using Pipelining

    OpenAIRE

    Aráoz, Julián; Zoltan, Cristina

    2015-01-01

    The general method for obtaining a parallel solution to a computational problem is to find a way to use the divide-and-conquer paradigm, so that processors act on their own data and all can be scheduled in parallel. MapReduce is an example of this approach: input data are transformed by the mappers in order to feed the reducers, which can run in parallel. In general this scheme gives efficient problem solutions, but that stops being true when the replication factor grows. We present a...
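
    A minimal rendering of the generic scheme described here (not the authors' pipelined variant): node-iterator triangle counting, in which the per-vertex counts form an embarrassingly parallel map and every triangle is seen three times.

        # Parallel node-iterator triangle counting: each worker counts
        # edges among its vertices' neighbours; the total divides by 3.
        from itertools import combinations
        from multiprocessing import Pool

        EDGES = {(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)}
        ADJ = {}
        for u, v in EDGES:
            ADJ.setdefault(u, set()).add(v)
            ADJ.setdefault(v, set()).add(u)

        def triangles_at(v):
            return sum(1 for a, b in combinations(ADJ[v], 2) if b in ADJ[a])

        if __name__ == "__main__":
            with Pool(2) as pool:
                per_vertex = pool.map(triangles_at, ADJ)   # the "map" phase
            print(sum(per_vertex) // 3)                    # reduce: 2 triangles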

  13. Parallel Architecture For Robotics Computation

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  14. Interpreting the Data: Parallel Analysis with Sawzall

    Directory of Open Access Journals (Sweden)

    Rob Pike

    2005-01-01

    Full Text Available Very large data sets often have a flat but regular structure and span multiple disks and machines. Examples include telephone call records, network logs, and web document repositories. These large data sets are not amenable to study using traditional database techniques, if only because they can be too large to fit in a single relational database. On the other hand, many of the analyses done on them can be expressed using simple, easily distributed computations: filtering, aggregation, extraction of statistics, and so on. We present a system for automating such analyses. A filtering phase, in which a query is expressed using a new procedural programming language, emits data to an aggregation phase. Both phases are distributed over hundreds or even thousands of computers. The results are then collated and saved to a file. The design – including the separation into two phases, the form of the programming language, and the properties of the aggregators – exploits the parallelism inherent in having data and computation distributed across many machines.
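
    A miniature of the two-phase design with invented log records (Sawzall itself uses its own language and built-in aggregator types): a per-record filtering phase emits key/value pairs and an aggregation phase collates them; both phases parallelize because the records are independent.

        # Filtering phase (per record, parallel) feeding a sum-table
        # aggregation phase, in the spirit of Sawzall's two phases.
        from collections import defaultdict
        from multiprocessing import Pool

        LOG = ["GET /a 200", "GET /b 404", "GET /a 200", "GET /c 500"]

        def filter_phase(record):           # one record in, emitted pairs out
            method, path, status = record.split()
            return [(status, 1)]

        def aggregate(emitted):             # collate everything the filters emit
            table = defaultdict(int)
            for pairs in emitted:
                for key, value in pairs:
                    table[key] += value
            return dict(table)

        if __name__ == "__main__":
            with Pool(2) as pool:
                emitted = pool.map(filter_phase, LOG)
            print(aggregate(emitted))       # {'200': 2, '404': 1, '500': 1}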

  15. Sub-Second Parallel State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rice, Mark J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Glaesemann, Kurt R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wang, Shaobu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Huang, Zhenyu [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-10-31

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA: two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions. This increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance. Therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits of sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements, and they provide opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate effects
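
    As a reminder of the computation being accelerated (the PSE solvers themselves are not detailed in this summary), here is the core weighted least-squares step of a linearized state estimator, with toy numbers; real estimators iterate this on nonlinear AC measurement models.

        # One linearized WLS state-estimation step:
        # x_hat = (H^T W H)^(-1) H^T W z, with W the inverse noise variances.
        import numpy as np

        H = np.array([[1.0,  0.0],    # meter on state 1
                      [0.0,  1.0],    # meter on state 2
                      [1.0, -1.0]])   # meter on a flow (state difference)
        z = np.array([1.02, 0.98, 0.05])       # redundant measurements
        W = np.diag([1 / 0.01 ** 2] * 3)       # equal meter accuracies
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print(x_hat, z - H @ x_hat)            # large residuals flag bad data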

  16. Using Natural Language Processing to Improve Accuracy of Automated Notifiable Disease Reporting

    OpenAIRE

    Friedlin, Jeff; Grannis, Shaun; Overhage, J Marc

    2008-01-01

    We examined whether using a natural language processing (NLP) system results in improved accuracy and completeness of automated electronic laboratory reporting (ELR) of notifiable conditions. We used data from a community-wide health information exchange that has automated ELR functionality. We focused on methicillin-resistant Staphylococcus aureus (MRSA), a reportable infection found in unstructured, free-text culture result reports. We used the Regenstrief EXtraction tool (REX) for this wor...

  17. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  18. Metal structures with parallel pores

    Science.gov (United States)

    Sherfey, J. M.

    1976-01-01

    Four methods of fabricating metal plates having uniformly sized parallel pores are studied: elongate bundle, wind and sinter, extrude and sinter, and corrugate stack. Such plates are suitable for electrodes for electrochemical and fuel cells.

  19. "Feeling" Series and Parallel Resistances.

    Science.gov (United States)

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
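
    The relationships the demonstration makes tangible, as two short functions:

        # Series resistances add directly; parallel resistances add as
        # reciprocals, so the combination is smaller than any branch.
        def series(*rs):
            return sum(rs)

        def parallel(*rs):
            return 1.0 / sum(1.0 / r for r in rs)

        print(series(100, 220))     # 320 ohms
        print(parallel(100, 220))   # 68.75 ohms, below either branch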

  20. Demonstrating Forces between Parallel Wires.

    Science.gov (United States)

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
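
    The effect being demonstrated follows the standard result F/L = mu0*I1*I2 / (2*pi*d); classroom-scale numbers show why a high-current supply is needed:

        # Force per unit length between long parallel wires; the force is
        # attractive for currents in the same direction.
        import math

        MU0 = 4e-7 * math.pi   # vacuum permeability, T*m/A

        def force_per_length(i1, i2, d):
            return MU0 * i1 * i2 / (2 * math.pi * d)

        # 10 A per wire, 1 cm apart: only about 2 millinewtons per metre.
        print(force_per_length(10, 10, 0.01))   # 0.002 N/m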

  1. Some parallel banded system solvers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sameh, A.H.

    1984-12-01

    This paper describes algorithms for solving narrow banded systems and the Helmholtz difference equations that are suitable for multiprocessing systems. The organization of the algorithms highlight the large grain parallelism inherent in the problems. 13 references, 5 tables.

  2. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  3. AUTOMATED ANALYSIS OF BREAKERS

    Directory of Open Access Journals (Sweden)

    E. M. Farhadzade

    2014-01-01

    Full Text Available Breakers belong to the equipment of electric power systems whose reliability influences, to a great extent, the reliability of power plants. In particular, breakers determine the structural reliability of the switchgear circuits of power stations and network substations. Failure of a breaker to clear a short circuit, followed by failure of the backup unit or of the long-distance protection system, quite often leads to a system emergency. The problem of improving breaker reliability and reducing maintenance expenses is becoming ever more urgent as the maintenance and repair costs of oil and air-blast circuit breakers systematically increase. The main direction for solving this problem is the improvement of diagnostic control methods and the organization of on-condition maintenance. But this demands a great amount of statistical information about the nameplate data of breakers and their operating conditions, their failures, testing and repair, as well as advanced software and a specific automated information system (AIS). A new AIS, with the logo AISV, was developed at the "Reliability of Power Equipment" department of the AzRDSI of Energy. The main features of AISV are: to ensure the security and accuracy of the database; to carry out systematic control of breakers' conformity with operating conditions; to estimate individual reliability values and how they change for a given combination of characteristics; and to provide the personnel responsible for the technical maintenance of breakers not only with information but also with methodological support, including recommendations for solving a given problem and advanced methods for their realization.

  4. Tactile Displays with Parallel Mechanism

    OpenAIRE

    Kyung, Ki-Uk; Kwon, Dong-Soo

    2008-01-01

    This chapter deals with tactile displays and their mechanisms. It briefly reviews the research history of mechanical tactile displays and their parallel arrangement, and then describes two systems that include tactile displays. A 5x6 pin-array tactile display with a parallel arrangement of piezoelectric bimorphs is described in Section 3. The tactile display has been embedded into a mouse device, and the performance of the device has been verified from pattern display...

  5. HEATR project: ATR algorithm parallelization

    Science.gov (United States)

    Deardorf, Catherine E.

    1998-09-01

    High Performance Computing (HPC) Embedded Application for Target Recognition (HEATR) is a project funded by the High Performance Computing Modernization Office through the Common HPC Software Support Initiative (CHSSI). The goal of CHSSI is to produce portable, parallel, multi-purpose, freely distributable support software to exploit emerging parallel computing technologies and enable application of scalable HPCs for various critical DoD applications. Specifically, the CHSSI goal for HEATR is to provide portable, parallel versions of several existing ATR detection and classification algorithms to the ATR-user community to achieve near real-time capability. The HEATR project will create parallel versions of existing automatic target recognition (ATR) detection and classification algorithms and generate reusable code that will support the porting and software development process for ATR HPC software. The HEATR team has selected detection/classification algorithms from both the model-based and training-based (template-based) arenas in order to consider the parallelization requirements for detection/classification algorithms across ATR technology. This allows the team to assess the impact that parallelization would have on detection/classification performance across ATR technology. A field demo is included in this project. Finally, any parallel tools produced to support the project will be refined and returned to the ATR user community along with the parallel ATR algorithms. This paper will review: (1) the HPCMP structure as it relates to HEATR, (2) the overall structure of the HEATR project, (3) preliminary results for the first algorithm Alpha Test, (4) CHSSI requirements for HEATR, and (5) project management issues and lessons learned.

  6. Teaching Parallel Programming Using Java

    OpenAIRE

    Shafi, Aamir; Akhtar, Aleem; Javed, Ansar; Carpenter, Bryan

    2014-01-01

    This paper presents an overview of the "Applied Parallel Computing" course taught to final year Software Engineering undergraduate students in Spring 2014 at NUST, Pakistan. The main objective of the course was to introduce practical parallel programming tools and techniques for shared and distributed memory concurrent systems. A unique aspect of the course was that Java was used as the principle programming language. The course was divided into three sections. The first section covered paral...

  7. Explicit Parallel Programming: System Description

    OpenAIRE

    Gamble, Jim; Ribbens, Calvin J.

    1991-01-01

    The implementation of the Explicit Parallel Programming (EPP) system is described. EPP is a prototype implementation of a language for writing parallel programs for shared memory multiprocessors. EPP may be viewed as a coordination language, since it is used to define the sequencing or ordering of various tasks, while the tasks themselves are defined in some other compilable language. The two main components of the EPP system---a compiler and an executive---are described in this report. An...

  8. Explicit Parallel Programming: User's Guide

    OpenAIRE

    Gamble, Jim; Ribbens, Calvin J.

    1991-01-01

    The Explicit Parallel Programming (EPP) language is defined and illustrated with several examples. EPP is a prototype implementation of a language for writing parallel programs for shared memory multiprocessors. EPP may be viewed as a coordination language, since it is used to define the sequencing or ordering of various tasks, while the tasks themselves are defined in some other compilable language. The prototype described here requires FORTRAN as the base language, but there is no inheren...

  9. Method and automated apparatus for detecting coliform organisms

    Science.gov (United States)

    Dill, W. P.; Taylor, R. E.; Jeffers, E. L. (Inventor)

    1980-01-01

    Method and automated apparatus are disclosed for determining the time of detection of metabolically produced hydrogen by coliform bacteria cultured in an electroanalytical cell, measured from the time the cell is inoculated with the bacteria. The detection time data provide bacteria concentration values. The apparatus is sequenced and controlled by a digital computer to discharge a spent sample, clean and sterilize the culture cell, provide a bacteria nutrient into the cell, control the temperature of the nutrient, inoculate the nutrient with a bacteria sample, measure the electrical potential difference produced by the cell, and measure the time of detection from inoculation.

  10. Geoparsing the Cultural Heritage Metadata

    Directory of Open Access Journals (Sweden)

    Vlasta Vodeb

    2013-09-01

    Full Text Available Geoparsing is a method for the automatic extraction of geographic coordinates from unstructured text and content. The method is also usable for metadata of immovable, movable and intangible cultural heritage, especially when geographic coordinates were not part of the metadata description in the documentation system. In this paper the usability and efficiency of online geoparsing services are presented, with the emphasis on the e-service Europeana Geoparser. The spatial accuracy of the Europeana Geoparser is analyzed using Slovenian cultural heritage objects. Finally, the usability of the method for automated extraction of geographic coordinates for cultural heritage is evaluated and discussed.

  11. A Droplet Microfluidic Platform for Automating Genetic Engineering.

    Science.gov (United States)

    Gach, Philip C; Shih, Steve C C; Sustarich, Jess; Keasling, Jay D; Hillson, Nathan J; Adams, Paul D; Singh, Anup K

    2016-05-20

    We present a water-in-oil droplet microfluidic platform for transformation, culture and expression of recombinant proteins in multiple host organisms including bacteria, yeast and fungi. The platform consists of a hybrid digital microfluidic/channel-based droplet chip with integrated temperature control to allow complete automation and integration of plasmid addition, heat-shock transformation, addition of selection medium, culture, and protein expression. The microfluidic format permitted significant reduction in consumption (100-fold) of expensive reagents such as DNA and enzymes compared to the benchtop method. The chip contains a channel to continuously replenish oil to the culture chamber to provide a fresh supply of oxygen to the cells for long-term (∼5 days) cell culture. The flow channel also replenished oil lost to evaporation and increased the number of droplets that could be processed and cultured. The platform was validated by transforming several plasmids into Escherichia coli including plasmids containing genes for fluorescent proteins GFP, BFP and RFP; plasmids with selectable markers for ampicillin or kanamycin resistance; and a Golden Gate DNA assembly reaction. We also demonstrate the applicability of this platform for transformation in widely used eukaryotic organisms such as Saccharomyces cerevisiae and Aspergillus niger. Duration and temperatures of the microfluidic heat-shock procedures were optimized to yield transformation efficiencies comparable to those obtained by benchtop methods with a throughput up to 6 droplets/min. The proposed platform offers potential for automation of molecular biology experiments significantly reducing cost, time and variability while improving throughput. PMID:26830031

  13. Automated security response robot

    Science.gov (United States)

    Ciccimaro, Dominic A.; Everett, Hobart R.; Gilbreath, Gary A.; Tran, Tien T.

    1999-01-01

    ROBART III is intended as an advanced demonstration platform for non-lethal response measures, extending the concepts of reflexive teleoperation into the realm of coordinated weapons control in law enforcement and urban warfare scenarios. A rich mix of ultrasonic and optical proximity and range sensors facilitates remote operation in unstructured and unexplored buildings with minimal operator supervision. Autonomous navigation and mapping of interior spaces is significantly enhanced by an innovative algorithm which exploits the fact that the majority of man-made structures are characterized by parallel and orthogonal walls. Extremely robust intruder detection and assessment capabilities are achieved through intelligent fusion of a multitude of inputs from various onboard motion sensors. Intruder detection is addressed by a 360-degree staring array of passive-IR motion detectors, augmented by a number of positionable head-mounted sensors. Automatic camera tracking of a moving target is accomplished using a video line digitizer. Non-lethal response systems include a six-barrelled pneumatically powered Gatling gun, high-powered strobe lights, and three ear-piercing 103-decibel sirens.

  14. Programmable automation systems in PSA

    International Nuclear Information System (INIS)

    The Finnish safety authority (STUK) requires plant-specific PSAs, and quantitative safety goals are set at different levels. Reliability analysis is more problematic when critical safety functions are realized with programmable automation systems: conventional modeling techniques do not necessarily apply to these systems, and quantification can seem impossible. However, it is important to analyze the contribution of programmable automation systems to plant safety, and PSA is the only method with a system-analytical view over safety. This report discusses the applicability of PSA methodology (fault tree analyses, failure modes and effects analyses) to the analysis of programmable automation systems. The problem of how to decompose programmable automation systems for reliability modeling purposes is discussed. In addition to the qualitative analysis and structural reliability modeling issues, the possibility of evaluating failure probabilities of programmable automation systems is considered. One solution to the quantification issue is the use of expert judgements; the principles for applying expert judgements are discussed and a framework for applying them is outlined. Further, the impacts of subjective estimates on the interpretation of PSA results are discussed. (orig.) (13 refs.)
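
    As a small illustration of fault tree quantification (the probabilities below are invented): with independent basic events, an AND gate multiplies probabilities and an OR gate combines complements; expert-judged failure probabilities for the programmable components would feed the leaves.

        # Minimal fault-tree gates over independent basic-event probabilities.
        from math import prod

        def and_gate(*ps):                 # all inputs must fail
            return prod(ps)

        def or_gate(*ps):                  # any input failing suffices
            q = 1.0
            for p in ps:
                q *= 1.0 - p
            return 1.0 - q

        channel_fails = or_gate(1e-4, 5e-5)          # hardware or software fault
        system_fails = and_gate(channel_fails, 0.1)  # plus backup unavailable
        print(f"{system_fails:.2e}")                 # ~1.5e-05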

  15. Synchronizing Parallel Tasks Using STM

    Directory of Open Access Journals (Sweden)

    Ryan Saptarshi Ray

    2015-03-01

    Full Text Available The past few years have marked the start of a historic transition from sequential to parallel computation. The necessity to write parallel programs is increasing as systems are getting more complex while processor speed increases are slowing down. Current parallel programming uses low-level constructs like threads and explicit synchronization with locks to coordinate thread execution. Parallel programs written with these constructs are difficult to design, program, and debug. Locks also have many drawbacks which make them a suboptimal solution. One such drawback is that locks should only enclose the critical section of the parallel-processing code: if locks are used to enclose the entire code, performance drops drastically. Software Transactional Memory (STM) is a promising new approach to programming shared-memory parallel processors. It is a concurrency control mechanism that is widely considered to be easier to use by programmers than locking. It allows portions of a program to execute in isolation, without regard to other, concurrently executing tasks. A programmer can reason about the correctness of code within a transaction and need not worry about complex interactions with other, concurrently executing parts of the program. If STM is used to enclose the entire code, performance matches that of code in which STM encloses only the critical section, and far exceeds that of code in which locks enclose the entire code; STM is thus easier to use than locks, since the critical section need not be identified. This paper presents the concept of writing code using STM and compares the performance of code using locks with that of code using STM. It also shows why the use of STM in parallel-processing code is better than the use of locks.
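
    Python's standard library offers no STM, so the sketch below demonstrates only the locking claim made above: holding a lock around an entire task serializes all threads, while locking just the shared update lets the independent work overlap.

        # Coarse locking vs. locking only the critical section.
        import threading, time

        lock = threading.Lock()
        counter = 0

        def coarse():              # lock held for the whole task
            global counter
            with lock:
                time.sleep(0.01)   # stand-in for independent work
                counter += 1

        def fine():                # lock held only for the shared update
            global counter
            time.sleep(0.01)       # independent work runs concurrently
            with lock:
                counter += 1

        for fn in (coarse, fine):
            counter, t0 = 0, time.perf_counter()
            threads = [threading.Thread(target=fn) for _ in range(20)]
            for t in threads: t.start()
            for t in threads: t.join()
            # coarse: ~0.2 s (serialized); fine: ~0.01-0.05 s (overlapped)
            print(fn.__name__, round(time.perf_counter() - t0, 2), "s")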

  16. International Conference Automation : Challenges in Automation, Robotics and Measurement Techniques

    CERN Document Server

    Zieliński, Cezary; Kaliczyńska, Małgorzata

    2016-01-01

    This book presents the set of papers accepted for presentation at the International Conference Automation, held in Warsaw on 2-4 March 2016. It contains research results from top experts in the fields of industrial automation, control, robotics and measurement techniques. Each chapter presents a thorough analysis of a specific technical problem, usually followed by numerical analysis, simulation, and a description of the results of implementing the solution to a real-world problem. The theoretical results, practical solutions and guidelines presented will be valuable both for researchers working in the area of engineering sciences and for practitioners solving industrial problems.

  17. Pattern-Driven Automatic Parallelization

    Directory of Open Access Journals (Sweden)

    Christoph W. Kessler

    1996-01-01

    Full Text Available This article describes a knowledge-based system for automatic parallelization of a wide class of sequential numerical codes operating on vectors and dense matrices, and for execution on distributed memory message-passing multiprocessors. Its main feature is a fast and powerful pattern recognition tool that locally identifies frequently occurring computations and programming concepts in the source code. This tool also works for dusty deck codes that have been "encrypted" by former machine-specific code transformations. Successful pattern recognition guides sophisticated code transformations including local algorithm replacement such that the parallelized code need not emerge from the sequential program structure by just parallelizing the loops. It allows access to an expert's knowledge on useful parallel algorithms, available machine-specific library routines, and powerful program transformations. The partially restored program semantics also supports local array alignment, distribution, and redistribution, and allows for faster and more exact prediction of the performance of the parallelized target code than is usually possible.
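
    A toy version of the recognition step, using Python's own ast module (a real system matches far richer computational idioms and then performs the replacement): spotting an accumulation loop that could be rewritten as a parallel reduction.

        # Recognize the idiom "acc = acc + <expr>" inside a for-loop body;
        # such loops are candidates for replacement by a parallel reduction.
        import ast, textwrap

        SRC = textwrap.dedent("""
            total = 0
            for x in data:
                total = total + x
        """)

        def find_reductions(tree):
            hits = []
            for node in ast.walk(tree):
                if not isinstance(node, ast.For):
                    continue
                for stmt in node.body:
                    if (isinstance(stmt, ast.Assign)
                            and isinstance(stmt.targets[0], ast.Name)
                            and isinstance(stmt.value, ast.BinOp)
                            and isinstance(stmt.value.op, ast.Add)
                            and isinstance(stmt.value.left, ast.Name)
                            and stmt.value.left.id == stmt.targets[0].id):
                        hits.append(stmt.targets[0].id)
            return hits

        print(find_reductions(ast.parse(SRC)))   # ['total']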

  18. Parallel Density-Based Clustering for Discovery of Ionospheric Phenomena

    Science.gov (United States)

    Pankratius, V.; Gowanlock, M.; Blair, D. M.

    2015-12-01

    Ionospheric total electron content maps derived from global networks of dual-frequency GPS receivers can reveal a plethora of ionospheric features in real-time and are key to space weather studies and natural hazard monitoring. However, growing data volumes from expanding sensor networks are making manual exploratory studies challenging. As the community is heading towards Big Data ionospheric science, automation and Computer-Aided Discovery become indispensable tools for scientists. One problem of machine learning methods is that they require domain-specific adaptations in order to be effective and useful for scientists. Addressing this problem, our Computer-Aided Discovery approach allows scientists to express various physical models as well as perturbation ranges for parameters. The search space is explored through an automated system and parallel processing of batched workloads, which finds corresponding matches and similarities in empirical data. We discuss density-based clustering as a particular method we employ in this process. Specifically, we adapt Density-Based Spatial Clustering of Applications with Noise (DBSCAN). This algorithm groups geospatial data points based on density. Clusters of points can be of arbitrary shape, and the number of clusters is not predetermined by the algorithm; only two input parameters need to be specified: (1) a distance threshold, (2) a minimum number of points within that threshold. We discuss an implementation of DBSCAN for batched workloads that is amenable to parallelization on manycore architectures such as Intel's Xeon Phi accelerator with 60+ general-purpose cores. This manycore parallelization can cluster large volumes of ionospheric total electronic content data quickly. Potential applications for cluster detection include the visualization, tracing, and examination of traveling ionospheric disturbances or other propagating phenomena. Acknowledgments. We acknowledge support from NSF ACI-1442997 (PI V. Pankratius).
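
    The two-parameter interface is easy to see in code. A sketch with synthetic points standing in for TEC map samples (scikit-learn's DBSCAN, not the authors' Xeon Phi implementation):

        # DBSCAN needs only a distance threshold (eps) and a minimum point
        # count; cluster count and cluster shape fall out of the data.
        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(0)
        dense = rng.normal(loc=(10.0, 40.0), scale=0.3, size=(50, 2))  # a feature
        noise = rng.uniform(low=0.0, high=50.0, size=(20, 2))          # background
        points = np.vstack([dense, noise])

        labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points)
        print("clusters:", labels.max() + 1,
              "noise points:", int((labels == -1).sum()))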

  19. Automated power management and control

    Science.gov (United States)

    Dolce, James L.

    1991-01-01

    A comprehensive automation design is being developed for Space Station Freedom's electric power system. A joint effort between NASA's Office of Aeronautics and Exploration Technology and NASA's Office of Space Station Freedom, it strives to increase station productivity by applying expert systems and conventional algorithms to automate power system operation. The initial station operation will use ground-based dispatches to perform the necessary command and control tasks. These tasks constitute planning and decision-making activities that strive to eliminate unplanned outages. We perceive an opportunity to help these dispatchers make fast and consistent on-line decisions by automating three key tasks: failure detection and diagnosis, resource scheduling, and security analysis. Expert systems will be used for the diagnostics and for the security analysis; conventional algorithms will be used for the resource scheduling.

  20. Unmet needs in automated cytogenetics

    International Nuclear Information System (INIS)

    Though some, at least, of the goals of automation systems for the analysis of clinical cytogenetic material seem either at hand, like automatic metaphase finding, or likely to be met in the near future, like operator-assisted semi-automatic analysis of banded metaphase spreads, important areas of cytogenetic analysis, most importantly the determination of chromosomal aberration frequencies in populations of cells or in samples of cells from people exposed to environmental mutagens, await practical methods of automation. Important as the clinical diagnostic applications are, it is apparent that increasing concern over the clastogenic effects of the multitude of potentially clastogenic chemical and physical agents to which human populations are increasingly exposed, and the resulting emergence of extensive cytogenetic testing protocols, makes the development of automation not only economically feasible but almost mandatory. The nature of the problems involved, and actual or possible approaches to their solution, are discussed.

  1. Manual versus automated blood sampling

    DEFF Research Database (Denmark)

    Teilmann, A C; Kalliokoski, Otto; Sørensen, Dorte B;

    2014-01-01

    Facial vein (cheek blood) and caudal vein (tail blood) phlebotomy are two commonly used techniques for obtaining blood samples from laboratory mice, while automated blood sampling through a permanent catheter is a relatively new technique in mice. The present study compared physiological parameters, glucocorticoid dynamics as well as the behavior of mice sampled repeatedly for 24 h by cheek blood, tail blood or automated blood sampling from the carotid artery. Mice subjected to cheek blood sampling lost significantly more body weight, had elevated levels of plasma corticosterone, excreted more fecal corticosterone metabolites, and expressed more anxious behavior than did the mice of the other groups. Plasma corticosterone levels of mice subjected to tail blood sampling were also elevated, although less significantly. Mice subjected to automated blood sampling were less affected with regard to the parameters...

  2. Network based automation for SMEs

    DEFF Research Database (Denmark)

    Shahabeddini Parizi, Mohammad; Radziwon, Agnieszka

    2016-01-01

    The implementation of appropriate automation concepts which increase productivity in Small and Medium Sized Enterprises (SMEs) requires a lot of effort, due to their limited resources. Therefore, it is strongly recommended for small firms to open up for the external sources of knowledge, which could be obtained through network interaction. Based on two extreme cases of SMEs representing low-tech industry and an in-depth analysis of their manufacturing facilities, this paper presents how collaboration between firms embedded in a regional ecosystem could result in implementation of new... Finally, this paper develops and discusses a set of guidelines for systematic productivity improvement within an innovative collaboration in regards to automation processes in SMEs.

  3. Design automation for integrated circuits

    Science.gov (United States)

    Newell, S. B.; de Geus, A. J.; Rohrer, R. A.

    1983-04-01

    Consideration is given to the development status of the use of computers in automated integrated circuit design methods, which promise the minimization of both design time and design error incidence. Integrated circuit design encompasses two major tasks: functional specification, in which the goal is a logic diagram that accurately represents the desired electronic function, and physical specification, in which the goal is an exact description of the physical locations of all circuit elements and their interconnections on the chip. Design automation not only saves money by reducing design and fabrication time, but also helps the community of systems and logic designers to work more innovatively. Attention is given to established design automation methodologies, programmable logic arrays, and design shortcuts.

  4. Automated Podcasting System for Universities

    Directory of Open Access Journals (Sweden)

    Ypatios Grigoriadis

    2013-03-01

    Full Text Available This paper presents the results achieved at Graz University of Technology (TU Graz in the field of automating the process of recording and publishing university lectures in a very new way. It outlines cornerstones of the development and integration of an automated recording system such as the lecture hall setup, the recording hardware and software architecture as well as the development of a text-based search for the final product by method of indexing video podcasts. Furthermore, the paper takes a look at didactical aspects, evaluations done in this context and future outlook.

  5. Agile Data: Automating database refactorings

    Directory of Open Access Journals (Sweden)

    Bruno Xavier

    2014-09-01

    Full Text Available This paper discusses an automated approach to database change management throughout the companies’ development workflow. By using automated tools, companies can avoid common issues related to manual database deployments. This work was motivated by analyzing usual problems within organizations, mostly originated from manual interventions that may result in systems disruptions and production incidents. In addition to practices of continuous integration and continuous delivery, the current paper describes a case study in which a suggested pipeline is implemented in order to reduce the deployment times and decrease incidents due to ineffective data controlling.

  6. Design automation, languages, and simulations

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    As the complexity of electronic systems continues to increase, the micro-electronic industry depends upon automation and simulations to adapt quickly to market changes and new technologies. Compiled from chapters contributed to CRC's best-selling VLSI Handbook, this volume covers a broad range of topics relevant to design automation, languages, and simulations. These include a collaborative framework that coordinates distributed design activities through the Internet, an overview of the Verilog hardware description language and its use in a design environment, hardware/software co-design, syst

  7. 2013 Chinese Intelligent Automation Conference

    CERN Document Server

    Deng, Zhidong

    2013-01-01

    Proceedings of the 2013 Chinese Intelligent Automation Conference presents selected research papers from the CIAC’13, held in Yangzhou, China. The topics include e.g. adaptive control, fuzzy control, neural network based control, knowledge based control, hybrid intelligent control, learning control, evolutionary mechanism based control, multi-sensor integration, failure diagnosis, and reconfigurable control. Engineers and researchers from academia, industry, and government can gain an inside view of new solutions combining ideas from multiple disciplines in the field of intelligent automation.   Zengqi Sun and Zhidong Deng are professors at the Department of Computer Science, Tsinghua University, China.

  8. Automated synthesis of sialylated oligosaccharides

    Directory of Open Access Journals (Sweden)

    Davide Esposito

    2012-09-01

    Sialic acid-containing glycans play a major role in cell-surface interactions with external partners such as cells and viruses. Straightforward access to sialosides is required in order to study their biological functions on a molecular level. Here, automated oligosaccharide synthesis was used to facilitate the preparation of this class of biomolecules. Our strategy relies on novel sialyl α-(2→3)- and α-(2→6)-galactosyl imidates, which, used in combination with the automated platform, provided rapid access to a small library of conjugation-ready sialosides of biological relevance.

  9. Massively parallel femtosecond laser processing.

    Science.gov (United States)

    Hasegawa, Satoshi; Ito, Haruyasu; Toyoda, Haruyoshi; Hayasaki, Yoshio

    2016-08-01

    Massively parallel femtosecond laser processing with more than 1000 beams was demonstrated. Parallel beams were generated by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM). The key to this technique is to optimize the CGH in the laser processing system using a scheme called in-system optimization. It was analytically demonstrated that the number of beams is determined by the horizontal number of pixels in the SLM, N_SLM, that is imaged at the pupil plane of an objective lens, and by a distance parameter p_d obtained by dividing the distance between adjacent beams by the diffraction-limited beam diameter. The performance limit of parallel laser processing in our system was estimated at N_SLM = 250 and p_d = 7.0. Based on these parameters, the maximum number of beams in a hexagonal close-packed structure was calculated to be 1189 by using an analytical equation. PMID:27505815
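
    The analytical equation itself is not reproduced in this record. As a rough, purely geometric cross-check (an assumption, not the paper's formula), one can count the hexagonal close-packed beam sites that fit inside a circular aperture of diameter N_SLM/p_d, measured in units of the beam spacing:

    ```python
    import numpy as np

    # Geometric cross-check (NOT the paper's analytic equation): count hexagonal
    # close-packed beam sites inside a circular aperture of diameter N_SLM / p_d,
    # measured in units of the beam spacing.
    N_SLM, p_d = 250, 7.0
    R = N_SLM / p_d / 2.0  # aperture radius in beam spacings

    count = 0
    jmax = int(R / (np.sqrt(3) / 2)) + 1
    for j in range(-jmax, jmax + 1):  # rows of the hexagonal lattice
        y = j * np.sqrt(3) / 2
        for i in range(-int(R) - 1, int(R) + 2):
            x = i + (j % 2) * 0.5  # every other row is offset by half a spacing
            if x * x + y * y <= R * R:
                count += 1

    print(count)  # on the order of 1.2e3, comparable to the reported 1189
    ```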

  10. PARALLEL SELF-ORGANIZING MAP

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new self-organizing map, the parallel self-organizing map (PSOM), was proposed for parallel information processing. In this model there are two separate layers of neurons connected together; the number of neurons in each layer, and the number of connections between them, equals the total number of elements of the input signal. Weight updating is managed through a sequence of operations on unitary transformation and operation matrices, so the conventional repeated learning procedure is modified to learn just once, and an algorithm was developed to realize this new learning method. On a typical classification example, PSOM demonstrated convergence results similar to Kohonen's model. Theoretical analysis and proofs also showed some interesting properties of PSOM. As pointed out, the contribution of such a network may not be so significant, but its parallel mode may be interesting for quantum computation.
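
    For reference, a minimal sketch of one update step of the conventional Kohonen procedure that PSOM is compared against (illustrative parameter choices; PSOM itself replaces this repeated loop with matrix operations applied once):

    ```python
    import numpy as np

    # One update step of a conventional Kohonen self-organizing map.
    # weights: (n_nodes, dim) map weights; grid: (n_nodes, 2) node coordinates.
    def som_step(weights, grid, x, lr=0.1, sigma=1.0):
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        d2 = np.sum((grid - grid[winner]) ** 2, axis=1)          # squared grid distance
        h = np.exp(-d2 / (2 * sigma**2))                         # neighborhood function
        return weights + lr * h[:, None] * (x - weights)         # pull nodes toward x
    ```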

  11. Generalized Quantum Search with Parallelism

    CERN Document Server

    Gingrich, R M; Cerf, N J; Gingrich, Robert; Williams, Colin P.; Cerf, Nicolas

    2000-01-01

    We generalize Grover's unstructured quantum search algorithm to enable it to use an arbitrary starting superposition and an arbitrary unitary matrix simultaneously. We derive an exact formula for the probability of the generalized Grover's algorithm succeeding after n iterations. We show that the fully generalized formula reduces to the special cases considered by previous authors. We then use the generalized formula to determine the optimal strategy for using the unstructured quantum search algorithm. On average the optimal strategy is about 12% better than the naive use of Grover's algorithm. The speedup obtained is not dramatic but it illustrates that a hybrid use of quantum computing and classical computing techniques can yield a performance that is better than either alone. We extend the analysis to the case of a society of k quantum searches acting in parallel. We derive an analytic formula that connects the degree of parallelism with the optimal strategy for k-parallel quantum search. We then derive th...
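
    The record's generalized success-probability formula is not reproduced here, but for orientation, in the standard special case it reduces to (uniform starting superposition, M marked items among N) the well-known Grover result is

    \[
      P(n) = \sin^2\bigl((2n+1)\,\theta\bigr), \qquad \sin\theta = \sqrt{M/N},
    \]

    so the success probability peaks after roughly \(n \approx (\pi/4)\sqrt{N/M}\) iterations.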

  12. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine;

    2013-01-01

    This paper reports the development of a new approach to analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and an acceptor plate create a sandwich in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  13. Visualizing Parallel Computer System Performance

    Science.gov (United States)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  14. PARAVT: Parallel Voronoi tessellation code

    Science.gov (United States)

    González, R. E.

    2016-10-01

    In this study, we present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and includes periodic conditions. In addition, the code computes neighbor lists, the Voronoi density, the Voronoi cell volume, and the density gradient for each particle, as well as densities on a regular grid. Code implementation and user guide are publicly available at https://github.com/regonzar/paravt.
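
    PARAVT itself is MPI-parallel code built on Qhull; as a small serial analogue of the per-particle quantities it computes (an illustration, not the PARAVT implementation), SciPy's Qhull bindings can produce Voronoi cell volumes and densities:

    ```python
    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    # Voronoi cell volumes and densities for a 3-D point set (serial toy example).
    points = np.random.rand(500, 3)
    vor = Voronoi(points)  # SciPy wraps the same Qhull library used by PARAVT

    density = np.full(len(points), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if len(region) == 0 or -1 in region:
            continue  # open cell on the outer boundary: no finite volume
        volume = ConvexHull(vor.vertices[region]).volume
        density[i] = 1.0 / volume  # unit-mass particles: density = 1 / cell volume
    ```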

  15. PARAVT: Parallel Voronoi Tessellation code

    CERN Document Server

    Gonzalez, Roberto E

    2016-01-01

    We present a new open source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbor lists are widely used. There are several serial Voronoi tessellation codes; however, no open source, parallel implementation is available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. Domain decomposition takes into account consistent boundary computation between tasks, and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density and the Voronoi cell volume for each particle, and can compute the density on a regular grid.

  16. Simplified Automated Image Analysis for Detection and Phenotyping of Mycobacterium tuberculosis on Porous Supports by Monitoring Growing Microcolonies

    OpenAIRE

    den Hertog, Alice L.; Dennis W Visser; Ingham, Colin J.; Frank H A G Fey; Paul R Klatser; Anthony, Richard M.

    2010-01-01

    BACKGROUND: Even with the advent of nucleic acid (NA) amplification technologies the culture of mycobacteria for diagnostic and other applications remains of critical importance. Notably microscopic observed drug susceptibility testing (MODS), as opposed to traditional culture on solid media or automated liquid culture, has shown potential to both speed up and increase the provision of mycobacterial culture in high burden settings. METHODS: Here we explore the growth of Mycobacterial tubercul...

  17. An automated method for high-throughput protein purification applied to a comparison of His-tag and GST-tag affinity chromatography

    Directory of Open Access Journals (Sweden)

    Büssow Konrad

    2003-07-01

    Background: Functional genomics, the systematic characterisation of the functions of an organism's genes, includes the study of the gene products, the proteins. Such studies require methods to express and purify these proteins in a parallel, time- and cost-effective manner. Results: We developed a method for parallel expression and purification of recombinant proteins with a hexahistidine tag (His-tag) or glutathione S-transferase tag (GST-tag) from bacterial expression systems. Proteins are expressed in 96-well microplates and are purified by a fully automated procedure on a pipetting robot. Up to 90 micrograms of purified protein can be obtained from 1 ml microplate cultures. The procedure is readily reproducible, and 96 proteins can be purified in approximately three hours. It avoids clearing of crude cellular lysates and the use of magnetic affinity beads and is therefore less expensive than comparable commercial systems. We have used this method to compare purification of a set of human proteins via His-tag or GST-tag. Proteins were expressed as fusions to an N-terminal tandem His- and GST-tag and were purified by metal chelating or glutathione affinity chromatography. The purity of the obtained protein samples was similar, yet His-tag purification resulted in higher yields for some proteins. Conclusion: A fully automated, robust and cost-effective method was developed for the purification of proteins that can be used to quickly characterise expression clones in high throughput and to produce large numbers of proteins for functional studies. His-tag affinity purification was found to be more efficient than purification via GST-tag for some proteins.

  18. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation; one of their models identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process- and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process.

  19. Automated theorem proving theory and practice

    CERN Document Server

    Newborn, Monty

    2001-01-01

    As the 21st century begins, the power of our magical new tool and partner, the computer, is increasing at an astonishing rate. Computers that perform billions of operations per second are now commonplace. Multiprocessors with thousands of little computers - relatively little! - can now carry out parallel computations and solve problems in seconds that only a few years ago took days or months. Chess-playing programs are on an even footing with the world's best players. IBM's Deep Blue defeated world champion Garry Kasparov in a match several years ago. Increasingly computers are expected to be more intelligent, to reason, to be able to draw conclusions from given facts, or abstractly, to prove theorems - the subject of this book. Specifically, this book is about two theorem-proving programs, THEO and HERBY. The first four chapters contain introductory material about automated theorem proving and the two programs. This includes material on the language used to express theorems, predicate calculus, and the rules of...

  1. An automated nudged elastic band method

    Science.gov (United States)

    Kolsbjerg, Esben L.; Groves, Michael N.; Hammer, Bjørk

    2016-09-01

    A robust, efficient, dynamic, and automated nudged elastic band (AutoNEB) algorithm to effectively locate transition states is presented. The strength of the algorithm is its ability to use fewer resources than the nudged elastic band (NEB) method by focusing first on converging a rough path before improving the resolution around the transition state. To demonstrate its efficiency, it has been benchmarked on a simple diffusion problem and a dehydrogenation reaction. In both cases, the total number of force evaluations used by the AutoNEB method is significantly smaller than that used by the NEB method. Furthermore, it is shown that for a fast and robust relaxation to the transition state, a climbing-image elastic band method in which the full spring force, rather than only the component parallel to the local tangent to the path, is applied is preferred, especially for pathways through energy landscapes with multiple local minima. The resulting corner cutting does not affect the accuracy of the transition state as long as the latter is located with the climbing-image method. Finally, a number of pitfalls often encountered while locating the true transition state of a reaction are discussed in terms of systematically exploring the multidimensional energy landscape of a given process.
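
    A minimal numpy sketch of the force assembly contrasted in the abstract (illustrative function names, not the AutoNEB implementation): standard NEB keeps only the spring-force component along the local tangent, while the variant recommended above keeps the full spring force:

    ```python
    import numpy as np

    # Force assembly for the interior images of an elastic band.
    # images, true_forces: (n_images, n_dof) arrays; true_forces[i] = -dE/dx at image i.
    def band_forces(images, true_forces, k=1.0, full_spring=False):
        forces = np.zeros_like(images)
        for i in range(1, len(images) - 1):
            t = images[i + 1] - images[i - 1]  # simple central-difference tangent
            t /= np.linalg.norm(t)
            f_spring = k * ((images[i + 1] - images[i]) - (images[i] - images[i - 1]))
            f_perp = true_forces[i] - np.dot(true_forces[i], t) * t
            if full_spring:
                forces[i] = f_perp + f_spring  # keep the whole spring force
            else:
                forces[i] = f_perp + np.dot(f_spring, t) * t  # standard NEB projection
        return forces
    ```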

  2. Seeing or moving in parallel

    DEFF Research Database (Denmark)

    Christensen, Mark Schram; Ehrsson, H Henrik; Nielsen, Jens Bo

    2013-01-01

    The underlying neural mechanisms of a perceptual bias for in-phase bimanual coordination movements are not well understood. In the present study, we measured brain activity with functional magnetic resonance imaging in healthy subjects during a task, where subjects performed bimanual index finger...... a different network, involving bilateral dorsal premotor cortex (PMd), primary motor cortex, and SMA, was more active when subjects viewed parallel movements while performing either symmetrical or parallel movements. Correlations between behavioral instability and brain activity were present in right lateral...

  3. Medipix2 parallel readout system

    Science.gov (United States)

    Fanti, V.; Marzeddu, R.; Randaccio, P.

    2003-08-01

    A fast parallel readout system based on a PCI board has been developed in the framework of the Medipix collaboration. The readout electronics consists of two boards: the motherboard directly interfacing the Medipix2 chip, and the PCI board with digital I/O ports 32 bits wide. The device driver and readout software have been developed at low level in Assembler to allow fast data transfer and image reconstruction. The parallel readout permits a transfer rate up to 64 Mbytes/s. http://medipix.web.cern.ch/MEDIPIX/

  4. Cellular automata a parallel model

    CERN Document Server

    Mazoyer, J

    1999-01-01

    Cellular automata can be viewed both as computational models and modelling systems of real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massive parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, while some recent results in relation to chaos from a new dynamic systems point of view are also presented. Audience: This book will be of interest to specialists of theoretical computer science and the parallelism challenge.

  5. Development of Microreactor Array Chip-Based Measurement System for Massively Parallel Analysis of Enzymatic Activity

    Science.gov (United States)

    Hosoi, Yosuke; Akagi, Takanori; Ichiki, Takanori

    Microarray chip technology, such as DNA chips, peptide chips and protein chips, is one of the promising approaches for achieving high-throughput screening (HTS) of biomolecule function. It offers great advantages for automated information processing, owing to the one-to-one indexing between array position and molecular function, and for massively parallel sample analysis, as a benefit of down-sizing and large-scale integration. In most cases, however, the function that can be evaluated by such microarray chips is limited to the affinity of target molecules. In this paper, we propose a new HTS system for enzymatic activity based on microreactor array chip technology. A prototype of the automated and massively parallel measurement system for fluorometric assay of enzymatic reactions was developed by combining microreactor array chips with a highly sensitive fluorescence microscope. The design strategy of the microreactor array chips and an optical measurement platform for the high-throughput enzyme assay are discussed.

  6. Automated quantification of synapses by fluorescence microscopy.

    Science.gov (United States)

    Schätzle, Philipp; Wuttke, René; Ziegler, Urs; Sonderegger, Peter

    2012-02-15

    The quantification of synapses in neuronal cultures is essential in studies of the molecular mechanisms underlying synaptogenesis and synaptic plasticity. Conventional counting of synapses based on morphological or immunocytochemical criteria is extremely work-intensive. We developed a fully automated method which quantifies synaptic elements and complete synapses based on immunocytochemistry. Pre- and postsynaptic elements are detected by their corresponding fluorescence signals and their proximity to dendrites. Synapses are defined as the combination of a pre- and postsynaptic element within a given distance. The analysis is performed in three dimensions and all parameters required for quantification can be easily adjusted by a graphical user interface. The integrated batch processing enables the analysis of large datasets without any further user interaction and is therefore efficient and timesaving. The potential of this method was demonstrated by an extensive quantification of synapses in neuronal cultures from DIV 7 to DIV 21. The method can be applied to all datasets containing a pre- and postsynaptic labeling plus a dendritic or cell surface marker.
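
    The proximity rule at the heart of such a quantification is easy to sketch (a hypothetical helper, not the authors' pipeline): pair pre- and postsynaptic puncta whose 3D centroids lie within a given distance:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Count synapses as pre/post puncta pairs closer than max_dist (same length
    # units as the coordinates, e.g. micrometres). Many-to-one matches are not
    # resolved here; this only sketches the proximity criterion.
    def count_synapses(pre_xyz, post_xyz, max_dist=0.5):
        tree = cKDTree(post_xyz)
        dist, _ = tree.query(pre_xyz, distance_upper_bound=max_dist)
        return int(np.count_nonzero(np.isfinite(dist)))
    ```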

  7. Automated quality control in a file-based broadcasting workflow

    Science.gov (United States)

    Zhang, Lina

    2014-04-01

    Benefiting from the development of information and Internet technologies, television broadcasting is transforming from inefficient tape-based production and distribution to integrated file-based workflows. However, no matter how many changes have taken place, successful broadcasting still depends on the ability to deliver a consistently high-quality signal to the audience. After the transition from tape to file, traditional methods of manual quality control (QC) become inadequate, subjective, and inefficient. Based on China Central Television's fully file-based workflow at its new site, this paper introduces an automated quality control test system for accurate detection of hidden defects in media content. It discusses the system framework and workflow control when automated QC is added. It puts forward a QC criterion and presents QC software that follows this criterion. It also reports experiments on QC speed using parallel processing and distributed computing. The performance of the test system shows that the adoption of automated QC can make production effective and efficient, and help the station achieve a competitive advantage in the media market.
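
    The parallel-processing speedup mentioned above follows the usual embarrassingly parallel pattern of distributing files across worker processes; a minimal sketch (hypothetical check function and file names, not CCTV's system):

    ```python
    from multiprocessing import Pool

    # Hypothetical per-file check returning a list of QC findings for one file.
    def qc_check(path):
        findings = []
        # ... decode the file and test for black frames, silence, loudness, etc. ...
        return path, findings

    if __name__ == "__main__":
        files = ["a.mxf", "b.mxf", "c.mxf"]  # placeholder media file list
        with Pool() as pool:  # one worker process per CPU core by default
            for path, findings in pool.imap_unordered(qc_check, files):
                print(path, "OK" if not findings else findings)
    ```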

  8. Improving automated load flexibility of nuclear power plants with ALFC

    International Nuclear Information System (INIS)

    In several German and Swiss nuclear power plants with pressurized water reactors (PWR), the control of the reactor power has been and will be improved in order to support the energy transition, with its increasing share of volatile renewable energy in the grid, through flexible load operation according to the needs of the load dispatcher (power system stability). Especially for the German NPPs mentioned, with a nominal electric power of approx. 1,500 MW, the general objective is the automation of the main grid-relevant operation modes. The new possibilities of digital I&C (such as TELEPERM XS) enable the automation of these operating modes, so that manual support is no longer necessary. These possibilities were and will be implemented by AREVA within the ALFC projects. Manifold algorithms that adapt to the reactor-physics variations during the nuclear load cycle enable a precise control of the axial power density distribution and of the reactivity management in the reactor core. This is the basis for highly automated load flexibility with parallel observance and surveillance of the operational limits of a PWR.

  9. Improving automated load flexibility of nuclear power plants with ALFC

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, Andreas [AREVA GmbH, Karlstein (Germany). Plant Control/Training]; Klaus, Peter [E.ON NPP Isar 2, Essenbach (Germany). Plant Operation/Production Engineering]

    2016-07-01

    In several German and Swiss nuclear power plants with pressurized water reactors (PWR), the control of the reactor power has been and will be improved in order to support the energy transition, with its increasing share of volatile renewable energy in the grid, through flexible load operation according to the needs of the load dispatcher (power system stability). Especially for the German NPPs mentioned, with a nominal electric power of approx. 1,500 MW, the general objective is the automation of the main grid-relevant operation modes. The new possibilities of digital I&C (such as TELEPERM XS) enable the automation of these operating modes, so that manual support is no longer necessary. These possibilities were and will be implemented by AREVA within the ALFC projects. Manifold algorithms that adapt to the reactor-physics variations during the nuclear load cycle enable a precise control of the axial power density distribution and of the reactivity management in the reactor core. This is the basis for highly automated load flexibility with parallel observance and surveillance of the operational limits of a PWR.

  10. Automated Ply Inspection (API) for AFP Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The Automated Ply Inspection (API) system autonomously inspects layups created by high speed automated fiber placement (AFP) machines. API comprises a high accuracy...

  11. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O'Sullivan, J.A. (Electronic Systems and Research Lab. of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum-entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structure of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively-parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  12. About the creation of a parallel bilingual corpora of web-publications

    OpenAIRE

    Lande, D. V.; Zhygalo, V. V.

    2008-01-01

    An algorithm for the creation of parallel text corpora is presented. The algorithm is based on the use of "key words" in text documents and on means for their automated translation. Key words were singled out using Russian and Ukrainian morphological dictionaries, as well as dictionaries of noun translations for the Russian and Ukrainian languages. In addition, empirical-statistical rules were used to calculate the weights of the terms in the documents. The algorithm under consi...
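
    The core matching idea can be sketched as follows (hypothetical helper functions keywords and translate standing in for the dictionary-based steps; not the authors' code): each Russian document is paired with the Ukrainian document whose translated key-word set overlaps it most:

    ```python
    # Pair each Russian document with the Ukrainian document whose translated
    # key-word set overlaps it most. `keywords` (text -> set of words) and
    # `translate` (word -> word) are hypothetical stand-ins for the dictionary-
    # based steps described above.
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def align(ru_docs, uk_docs, keywords, translate, threshold=0.3):
        pairs = []
        for ru_id, ru_text in ru_docs.items():
            ru_keys = {translate(w) for w in keywords(ru_text)}
            best = max(uk_docs, key=lambda uk: jaccard(ru_keys, keywords(uk_docs[uk])))
            score = jaccard(ru_keys, keywords(uk_docs[best]))
            if score >= threshold:  # accept only sufficiently similar pairs
                pairs.append((ru_id, best, score))
        return pairs
    ```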

  13. Experiments with parallel algorithms for combinatorial problems

    NARCIS (Netherlands)

    G.A.P. Kindervater (Gerard); H.W.J.M. Trienekens

    1985-01-01

    In the last decade many models for parallel computation have been proposed and many parallel algorithms have been developed. However, few of these models have been realized and most of these algorithms are supposed to run on idealized, unrealistic parallel machines. The parallel machines

  14. Composition and Rhetoric Usage of Parallelism

    Institute of Scientific and Technical Information of China (English)

    祖林; 朱蕾

    2012-01-01

    Parallelism achieves a rhetorical effect by syntactic means. The use of parallelism creates an effect of balanced beauty between words and words, sentences and sentences, and paragraphs and paragraphs. From the semantic perspective, all parts and components of a parallelism are closely related; parallelism plays an important role in creating rhetorical effect and strengthening the tone.

  15. Open architecture for multilingual parallel texts

    CERN Document Server

    Benitez, M T Carrasco

    2008-01-01

    Multilingual parallel texts (abbreviated to parallel texts) are linguistic versions of the same content ("translations"); e.g., the Maastricht Treaty in English and Spanish are parallel texts. This document is about creating an open architecture for the whole Authoring, Translation and Publishing Chain (ATP-chain) for the processing of parallel texts.

  16. Towards Automated System Synthesis Using SCIDUCTION

    OpenAIRE

    Jha, Susmit Kumar

    2011-01-01

    Automated synthesis of systems that are correct by construction has been a long-standing goal of computer science. Synthesis is a creative task and requires human intuition and skill. Its complete automation is currently beyond the capacity of programs that do automated reasoning. However, there is a pressing need for tools and techniques that can automate non-intuitive and error-prone synthesis tasks. This thesis proposes a novel synthesis approach to solve such tasks in the synthesis of pro...

  17. Automated activation-analysis system

    International Nuclear Information System (INIS)

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. The system and its mode of operation for a large reconnaissance survey are described

  18. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade‐induced automation in manufacturing firms using unique data combining a retrospective survey that we have assembled with register data for 2005‐2010. In particular, we establish a causal effect where firms that have specialized in product types for which ...

  19. Feasibility Analysis of Crane Automation

    Institute of Scientific and Technical Information of China (English)

    DONG Ming-xiao; MEI Xue-song; JIANG Ge-dong; ZHANG Gui-qing

    2006-01-01

    This paper summarizes the modeling methods and the open-loop and closed-loop control techniques of various forms of cranes worldwide, and discusses their feasibility and limitations in engineering. Then the dynamic behaviors of cranes are analyzed. Finally, we propose applicable modeling methods and feasible control techniques and demonstrate the feasibility of crane automation.

  20. Automation of Feynman diagram evaluations

    International Nuclear Information System (INIS)

    A C program, DIANA (DIagram ANAlyser), for the automation of Feynman diagram evaluations is presented. It consists of two parts: the analyzer of diagrams and the interpreter of a special text-manipulating language. This language can be used to create source code for analytical or numerical evaluations and to keep control of the process in general.

  1. Automated Clustering of Similar Amendments

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The Italian Senate is clogged by computer-generated amendments. This talk will describe a simple strategy to cluster them in an automated fashion, so that the appropriate Senate procedures can be used to get rid of them in one sweep.

  2. Automated visual inspection of textile

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1997-01-01

    A method for automated inspection of two types of textile is presented. The goal of the inspection is to detect defects in the textile. A prototype is constructed for simulating the textile production line. At the prototype, the images of the textile are acquired by a high-speed line-scan camera...

  3. Distribution system analysis and automation

    CERN Document Server

    Gers, Juan

    2013-01-01

    A comprehensive guide to techniques that allow engineers to simulate, analyse and optimise power distribution systems which combined with automation, underpin the emerging concept of the "smart grid". This book is supported by theoretical concepts with real-world applications and MATLAB exercises.

  4. Teacherbot: Interventions in Automated Teaching

    Science.gov (United States)

    Bayne, Sian

    2015-01-01

    Promises of "teacher-light" tuition and of enhanced "efficiency" via the automation of teaching have been with us since the early days of digital education, sometimes embraced by academics and institutions, and sometimes resisted as a set of moves which are damaging to teacher professionalism and to the humanistic values of…

  5. Automation, Labor Productivity and Employment

    DEFF Research Database (Denmark)

    Kromann, Lene; Rose Skaksen, Jan; Sørensen, Anders

    CEBR now presents the first report of the AIM project. The report shows that there is good potential for further automation in a large share of Danish manufacturing firms, since today, on average, only about 30% of the firms' production processes are automated. In particular, the process area...

  6. Adaptation : A Partially Automated Approach

    NARCIS (Netherlands)

    Manjing, Tham; Bukhsh, F.A.; Weigand, H.

    2014-01-01

    This paper showcases the possibility of creating an adaptive auditing system. Adaptation in an audit environment needs human intervention at some point. Based on a case study, this paper focuses on automation of the adaptation process. It is divided into solution design and validation parts. The artifact

  7. Automation of Space Inventory Management

    Science.gov (United States)

    Fink, Patrick W.; Ngo, Phong; Wagner, Raymond; Barton, Richard; Gifford, Kevin

    2009-01-01

    This viewgraph presentation describes the utilization of automated space-based inventory management through handheld RFID readers and BioNet Middleware. The contents include: 1) Space-Based Inventory Management; 2) Real-Time RFID Location and Tracking; 3) Surface Acoustic Wave (SAW) RFID; and 4) BioNet Middleware.

  8. Automation; The New Industrial Revolution.

    Science.gov (United States)

    Arnstein, George E.

    Automation is a word that describes the workings of computers and the innovations of automatic transfer machines in the factory. As the hallmark of the new industrial revolution, computers displace workers and create a need for new skills and retraining programs. With improved communication between industry and the educational community to…

  9. Informativeness of Parallel Kalman Filters

    OpenAIRE

    Hajiyev, Chingiz

    2004-01-01

    This article considers the informativeness of parallel Kalman filters. Expressions are derived for determining the amount of information obtained by additional measurements with a reserved (backup) measurement channel during processing. Theorems are proved asserting that the informativeness of Kalman filters increases when a failure-free reserved measurement channel is present.

  10. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, and a Cray

  11. Einstein: Distant parallelism and electromagnetism

    Energy Technology Data Exchange (ETDEWEB)

    Israelit, M.; Rosen, N.

    1985-03-01

    Einstein's approach to unified field theories based on the geometry of distant parallelism is discussed. The simplest theory of this type, describing gravitation and electromagnetism, is investigated. It is found that there is a charge-current density vector associated with the geometry. However, in the static spherically symmetric case no singularity-free solutions for this vector exist.

  12. Matpar: Parallel Extensions for MATLAB

    Science.gov (United States)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  13. Parallel Sparse Matrix - Vector Product

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd

    This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed, and the report shows how these classes can be used...

  14. New Methodologies for Parallel Architecture

    Institute of Scientific and Technical Information of China (English)

    Dong-Rui Fan; Xiao-Wei Li; Guo-Jie Li

    2011-01-01

    Moore's law continues to grant computer architects ever more transistors in the foreseeable future, and parallelism is the key to continued performance scaling in modern microprocessors. In this paper, the achievements of our research project on parallel architecture, which is supported by the National Basic Research 973 Program of China, are systematically presented. The innovative approaches and techniques to solve the significant problems in parallel architecture design are summarized, including architecture-level optimization, compiler- and language-supported technologies, reliability, power-performance efficient design, test and verification challenges, and platform building. Two prototype chips, a multi-heavy-core Godson-3 and a many-light-core Godson-T, are described to demonstrate the highly scalable and reconfigurable parallel architecture designs. We also present some of our achievements appearing in ISCA, MICRO, ISSCC, HPCA, PLDI, PACT, IJCAI, Hot Chips, DATE, IEEE Trans. VLSI, IEEE Micro, IEEE Trans. Computers, etc.

  15. Illinois: Library Automation and Connectivity Initiatives.

    Science.gov (United States)

    Lamont, Bridget L.; Bloomberg, Kathleen L.

    1996-01-01

    Discussion of library automation in Illinois focuses on ILLINET, the Illinois Library and Information Network. Topics include automated resource sharing; ILLINET's online catalog; regional library system automation; community networking and public library technology development; telecommunications initiatives; electronic access to state government…

  16. You're a What? Automation Technician

    Science.gov (United States)

    Mullins, John

    2010-01-01

    Many people think of automation as laborsaving technology, but it sure keeps Jim Duffell busy. Defined simply, automation is a technique for making a device run or a process occur with minimal direct human intervention. But the functions and technologies involved in automated manufacturing are complex. Nearly all functions, from orders coming in…

  17. Use Computer-Aided Tools to Parallelize Large CFD Applications

    Science.gov (United States)

    Jin, H.; Frumkin, M.; Yan, J.

    2000-01-01

    Porting applications to high-performance parallel computers is always a challenging task. It is time consuming and costly. With rapid progress in hardware architectures and the increasing complexity of real applications in recent years, the problem has become even more severe. Today, scalability and high performance mostly involve handwritten parallel programs using message-passing libraries (e.g. MPI). However, this process is very difficult and often error-prone. The recent reemergence of shared memory parallel (SMP) architectures, such as the cache coherent Non-Uniform Memory Access (ccNUMA) architecture used in the SGI Origin 2000, shows good prospects for scaling beyond hundreds of processors. Programming on an SMP is simplified by working in a globally accessible address space. The user can supply compiler directives, such as OpenMP, to parallelize the code. As an industry standard for portable implementation of parallel programs for SMPs, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran, C and C++ to express shared memory parallelism. It promises an incremental path for parallel conversion of existing software, as well as scalability and performance for a complete rewrite or an entirely new development. Perhaps the main disadvantage of programming with directives is that inserted directives may not necessarily enhance performance. In the worst cases, they can create erroneous results. While vendors have provided tools to perform error-checking and profiling, automation in directive insertion is very limited and often fails on large programs, primarily due to the lack of a thorough enough data dependence analysis. To overcome this deficiency, we have developed a toolkit, CAPO, to automatically insert OpenMP directives in Fortran programs and apply certain degrees of optimization. CAPO is aimed at taking advantage of detailed inter-procedural dependence analysis provided by CAPTools, developed by the University of

  18. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  19. Lab on a chip automates in vitro cell culturing

    DEFF Research Database (Denmark)

    Perozziello, Gerardo; Møllenbach, Jacob; Laursen, Steen;

    2012-01-01

    A novel in vitro fertilization system is presented based on an incubation chamber and a microfluidic device which serves as advanced microfluidic cultivation chamber. The flow is controlled by hydrostatic height differences and evaporation is avoided with help of mineral oil. Six patient compartm...

  20. A Survey of Parallel Data Mining

    OpenAIRE

    Freitas, Alex A

    1998-01-01

    With the fast, continuous increase in the number and size of databases, parallel data mining is a natural and cost-effective approach to tackle the problem of scalability in data mining. Recently there has been considerable research on parallel data mining. However, most projects focus on the parallelization of a single kind of data mining algorithm/paradigm. This paper surveys parallel data mining from a broader perspective. More precisely, we discuss the parallelization of ...

  1. Evaluation of the measurement uncertainty in automated long-term sampling of PCDD/PCDFs.

    Science.gov (United States)

    Vicaretti, M; D'Emilia, G; Mosca, S; Guerriero, E; Rotatori, M

    2013-12-01

    Since the publication of the first version of European standard EN-1948 in 1996, long-term sampling equipment has been improved to a high standard for the sampling and analysis of polychlorodibenzo-p-dioxin (PCDD)/polychlorodibenzofuran (PCDF) emissions from industrial sources. Current automated PCDD/PCDF sampling systems make it possible to extend the measurement time from 6-8 h to 15-30 days, in order to obtain data more representative of the plant's real pollutant emission over the long term. EN-1948:2006 is still the European technical reference standard for the determination of PCDD/PCDF from stationary source emissions. In this paper, a methodology to estimate the measurement uncertainty of long-term automated sampling is presented. The methodology has been tested on a set of high-concentration sampling data resulting from a specific campaign; it is proposed with the intent that it be applied to further similar studies and generalized. A comparison between short-term sampling data resulting from parallel manual and automated measurements was also carried out, in order to verify the feasibility and usefulness of automated systems and to establish correlations between the results of the two methods, so that the manual method can be used to calibrate the automated long-term one. The uncertainty components of the manual method are analyzed following the requirements of EN-1948-3:2006, allowing a preliminary evaluation of the corresponding uncertainty components of the automated system. Then, a comparison between experimental data from parallel sampling campaigns carried out over short- and long-term sampling periods is presented. Long-term sampling is more reliable for monitoring PCDD/PCDF emissions than occasional short-term sampling, and automated sampling systems can provide very useful emission data over both short and long sampling periods. Despite this, due to the different application of the long-term sampling systems, the automated results could not be
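
    For orientation, the combination of uncertainty components referred to above typically follows the standard GUM approach (a general relation, not a formula quoted from this record): for a result y = f(x_1, ..., x_n),

    \[
      u_c(y) = \sqrt{\sum_{i=1}^{n}\left(\frac{\partial f}{\partial x_i}\right)^{2} u^{2}(x_i)},
      \qquad U = k\,u_c(y),
    \]

    with the expanded uncertainty U usually reported with a coverage factor k = 2 (approximately 95 % confidence).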

  2. Prevalence of discordant microscopic changes with automated CBC analysis

    Directory of Open Access Journals (Sweden)

    Fabiano de Jesus Santos

    2014-12-01

    Introduction: The most common cause of diagnostic error is related to errors in laboratory tests as well as errors in the interpretation of results. In order to reduce them, laboratories now rely on modern equipment that provides accurate and reliable results. The development of automation has revolutionized laboratory procedures in Brazil and worldwide. Objective: To determine the prevalence of microscopic changes present in blood slides concordant and discordant with results obtained by fully automated procedures. Materials and method: From January to July 2013, 1,000 hematological parameter slides were analyzed. Automated analysis was performed on last-generation equipment whose methodology is based on electrical impedance and which is able to quantify all the figurative elements of the blood across 22 parameters. Microscopy was performed simultaneously by two expert microscopists. Results: The data showed that only 42.70% of the slides were concordant, compared with 57.30% discordant. The main findings among the discordant were: changes in red blood cells, 43.70% (n = 250); white blood cells, 38.46% (n = 220); and platelet counts, 17.80% (n = 102). Discussion: The data show that some results are not consistent with the clinical or physiological state of the individual and cannot be explained because they were not investigated, which may compromise the final diagnosis. Conclusion: It was observed that qualitative microscopic analysis is of fundamental importance and must be performed in parallel with automated analysis in order to obtain reliable results, with a positive impact on prevention, diagnosis, prognosis, and therapeutic follow-up.

  3. Automated target recognition and tracking using an optical pattern recognition neural network

    Science.gov (United States)

    Chao, Tien-Hsin

    1991-01-01

    The ongoing development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that integrates an innovative optical parallel processor with a feature-extraction-based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature-extraction-based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator where holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10^14 analog connections per second, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.

  4. Two Level Parallel Grammatical Evolution

    Science.gov (United States)

    Ošmera, Pavel

    This paper describes a Two-Level Parallel Grammatical Evolution (TLPGE) that can evolve complete programs using a variable-length linear genome to govern the mapping of a Backus-Naur Form grammar definition. To increase the efficiency of Grammatical Evolution (GE), the influence of backward processing was tested and a second level with differential evolution was added. The significance of backward coding (BC) and a comparison with the standard coding of GEs are presented. The new method is based on parallel grammatical evolution (PGE) with a backward processing algorithm, which is further extended with a differential evolution algorithm. Thus a two-level optimization method was formed in an attempt to take advantage of the benefits of both original methods and avoid their difficulties. Both methods are discussed and the architecture of their combination is described. An application is also discussed and results on a real-world application are presented.
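
    A minimal sketch of the genotype-to-phenotype mapping underlying grammatical evolution (toy grammar; the backward flag mimics the backward-coding idea of reading codons from the end of the genome; not the TLPGE implementation):

    ```python
    # Genotype-to-phenotype mapping of grammatical evolution on a toy grammar.
    # The `backward` flag mimics backward coding: codons are read from the end
    # of the genome instead of the beginning.
    GRAMMAR = {
        "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
        "<op>": [["+"], ["*"]],
    }

    def map_genome(genome, start="<expr>", max_wraps=2, backward=False):
        codons = list(reversed(genome)) if backward else list(genome)
        seq, i, wraps = [start], 0, 0
        while any(s in GRAMMAR for s in seq):
            j = next(k for k, s in enumerate(seq) if s in GRAMMAR)  # leftmost nonterminal
            if i >= len(codons):  # wrap the genome if codons run out
                i, wraps = 0, wraps + 1
                if wraps > max_wraps:
                    return None  # mapping failed
            rules = GRAMMAR[seq[j]]
            seq[j:j + 1] = rules[codons[i] % len(rules)]  # codon picks a production
            i += 1
        return "".join(seq)

    print(map_genome([3, 1, 4, 1, 5, 9, 2, 6]))                  # forward -> "x+x"
    print(map_genome([3, 1, 4, 1, 5, 9, 2, 6], backward=True))   # backward -> "1*1"
    ```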

  5. Structural synthesis of parallel robots

    CERN Document Server

    Gogu, Grigore

    This book represents the fifth part of a larger work dedicated to the structural synthesis of parallel robots. The originality of this work resides in the fact that it combines new formulae for mobility, connectivity, redundancy and overconstraints with evolutionary morphology in a unified structural synthesis approach that yields interesting and innovative solutions for parallel robotic manipulators.  This is the first book on robotics that presents solutions for coupled, decoupled, uncoupled, fully-isotropic and maximally regular robotic manipulators with Schönflies motions systematically generated by using the structural synthesis approach proposed in Part 1.  Overconstrained non-redundant/overactuated/redundantly actuated solutions with simple/complex limbs are proposed. Many solutions are presented here for the first time in the literature. The author had to make a difficult and challenging choice between protecting these solutions through patents and releasing them directly into the public domain. T...

  6. Parallel supercomputing with commodity components

    Energy Technology Data Exchange (ETDEWEB)

    Warren, M.S.; Goda, M.P. [Los Alamos National Lab., NM (United States); Becker, D.J. [Goddard Space Flight Center, Greenbelt, MD (United States)] [and others

    1997-09-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 × 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  7. The Origins of Geek Culture

    DEFF Research Database (Denmark)

    Konzack, Lars

    2014-01-01

    The article describes how, over the course of the twentieth century, a parallel intellectual milieu arose around science fiction, fantasy, horror, superheroes, role-playing games and board games, as a countermove to this culture being in practice excluded from the established humanities research communities.

  8. Qualification of an automated device to objectively assess the effect of hair care products on hair shine.

    Science.gov (United States)

    Hagens, Ralf; Wiersbinski, Tim; Becker, Michael E; Weisshaar, Jürgen; Schreiner, Volker; Wenck, Horst

    2011-01-01

    The authors developed and qualified an automated routine screening tool to quantify hair shine. This tool is able to separately record individual properties of hair shine such as specular reflection and multiple reflection, as well as additional features such as sparkle, parallelism of hair fibers, and hair color, which strongly affect the subjective ranking by individual readers. A side-by-side comparison of different hair care and styling products with regard to hair shine using the automated screening tool in parallel with standard panel assessment showed that the automated system provides an almost identical ranking and the same statistical significances as the panel assessment. Provided stringent stratification of hair fibers for color and parallelism, the automated tool competes favorably with panel assessments of hair shine. In this case, data generated with the opsira Shine-Box are clearly superior over data generated by panel assessment in terms of reliability and repeatability, workload and time consumption, and sensitivity and specificity to detect differences after shampoo, conditioner, and leave-in treatment. The automated tool is therefore well suited to replace standard panel assessments in claim support, at least as a screening tool. A further advantage of the automated system over panel assessments is the fact that absolute numeric values are generated for a given hair care product, whereas panel assessments can only give rankings of a series of hair care products included in the same study. Thus, the absolute numeric data generated with the automated system allow comparison of hair care products between studies or at different time points after treatment.

  9. Preconditioned method in parallel computation

    Institute of Scientific and Technical Information of China (English)

    Wu Ruichan; Wei Jianing

    2006-01-01

The grid equations arising from domain decomposition in parallel computation are solved, and a method of local orthogonalization for large-scale numerical computation is presented. It constructs a preconditioned iteration matrix by combining a simplified LU decomposition with local orthogonalization, and the convergence of the solution is proved. As the example indicates, this algorithm can increase the rate of computation efficiently and is quite stable.
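
    The general pattern of preconditioned iteration described here can be sketched as follows; this minimal example uses an incomplete-LU preconditioner from SciPy rather than the paper's local-orthogonalization construction, and a 1-D Poisson system stands in for the grid equations:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Stand-in "grid equation": a 1-D Poisson system A x = b.
    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Incomplete LU factorization used as the preconditioner M ~ A^(-1).
    ilu = spla.spilu(A)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    x, info = spla.gmres(A, b, M=M)     # preconditioned Krylov iteration
    print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
    ```

    The preconditioner trades a cheap approximate factorization for a large reduction in iteration count, which is the same trade the abstract's LU-plus-orthogonalization construction makes.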

  10. Combinatorics of spreads and parallelisms

    CERN Document Server

    Johnson, Norman

    2010-01-01

Partitions of Vector Spaces; Quasi-Subgeometry Partitions; Finite Focal-Spreads; Generalizing André Spreads; The Going Up Construction for Focal-Spreads; Subgeometry Partitions; Subgeometry and Quasi-Subgeometry Partitions; Subgeometries from Focal-Spreads; Extended André Subgeometries; Kantor's Flag-Transitive Designs; Maximal Additive Partial Spreads; Subplane Covered Nets and Baer Groups; Partial Desarguesian t-Parallelisms; Direct Products of Affine Planes; Jha-Johnson SL(2,

  11. Shot noise in parallel wires

    OpenAIRE

    Lagerqvist, Johan; Chen, Yu-Chang; Di Ventra, Massimiliano

    2004-01-01

    We report first-principles calculations of shot noise properties of parallel carbon wires in the regime in which the interwire distance is much smaller than the inelastic mean free path. We find that, with increasing interwire distance, the current approaches rapidly a value close to twice the current of each wire, while the Fano factor, for the same distances, is still larger than the Fano factor of a single wire. This enhanced Fano factor is the signature of the correlation between electron...

  12. Lightweight Specifications for Parallel Correctness

    OpenAIRE

    Burnim, Jacob Samuels

    2012-01-01

With the spread of multicore processors, it is increasingly necessary for programmers to write parallel software. Yet writing correct parallel software with explicit multithreading remains a difficult undertaking. Though many tools exist to help test, debug, and verify parallel programs, such tools are often hindered by a lack of any specification from the programmer of the intended, correct parallel behavior of his or her software. In this dissertation, we propose three novel lightweight specificati...

  13. Distributed and Parallel Component Library

    Institute of Scientific and Technical Information of China (English)

    XU Zheng-quan; XU Yang; YAN Ai-ping

    2005-01-01

A software component library is an essential part of reuse-based software development. It is shown that using a single component library to store and search all kinds of components is very inefficient. We construct multiple libraries to support software reuse and use PVM as the development environment to imitate a large-scale computer, which is expected to fulfill distributed storage and parallel search of components efficiently and improve software reuse.

  14. Fecal culture

    Science.gov (United States)

    Stool culture; Culture - stool ... stool tests are done in addition to the culture, such as: Gram stain of stool Fecal smear ... Giannella RA. Infectious enteritis and proctocolitis and bacterial food poisoning. In: Feldman M, Friedman LS, Brandt LJ, ...

  15. Urine culture

    Science.gov (United States)

    Culture and sensitivity - urine ... when urinating. You also may have a urine culture after you have been treated for an infection. ... when bacteria or yeast are found in the culture. This likely means that you have a urinary ...

  16. An Overview of Moonlight Applications Test Automation

    Directory of Open Access Journals (Sweden)

    Appasami Govindasamy

    2010-09-01

Nowadays web applications are developed with new technologies like Moonlight, Silverlight, JavaFX, FLEX, etc. Silverlight is Microsoft's cross-platform runtime and development technology for running Web-based multimedia applications on the Windows platform. Moonlight is an open-source implementation of the Silverlight development platform for Linux and other Unix/X11-based operating systems, developed in collaboration with Novell Corporation; it is a new technology in .NET 4.0 for building rich, interactive and attractive platform-independent web applications. User interface test automation is essential for software companies to reduce test time, cost and manpower. Testing these kinds of applications is not easy; user interface test automation in particular is very difficult. Software test automation has the capability to decrease the overall cost of testing and improve software quality, but most testing organizations have not been able to achieve its full potential. Many groups that implement test automation programs run into a number of common pitfalls. These problems can lead to test automation plans being completely scrapped, with the tools purchased for test automation becoming expensive. Often teams continue their automation effort, burdened with huge costs in maintaining large suites of automated test scripts. This paper first discusses some of the key benefits of software test automation, then examines the most common techniques used to implement test automation of Moonlight applications, and finally discusses test automation in practice.

  17. Automatic Performance Debugging of SPMD-style Parallel Programs

    CERN Document Server

    Liu, Xu; Zhan, Kunlin; Shi, Weisong; Yuan, Lin; Meng, Dan; Wang, Lei

    2011-01-01

The single program, multiple data (SPMD) programming model is widely used for both high performance computing and cloud computing. In this paper, we design and implement an innovative system, AutoAnalyzer, that automates the process of debugging performance problems of SPMD-style parallel programs, including data collection, performance behavior analysis, locating bottlenecks, and uncovering their root causes. AutoAnalyzer is unique in terms of two features: first, without any a priori knowledge, it automatically locates bottlenecks and uncovers their root causes for performance optimization; second, it is lightweight in terms of the size of performance data to be collected and analyzed. Our contributions are three-fold: first, we propose two effective clustering algorithms to investigate the existence of performance bottlenecks that cause process behavior dissimilarity or code region behavior disparity, respectively; meanwhile, we present two searching algorithms to locate bottlenecks; second, on a basis o...

  18. Automated methods of corrosion measurement

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Bech-Nielsen, Gregers; Reeve, John Ch;

    1997-01-01

Measurements of corrosion rates and other parameters connected with corrosion processes are important, first as indicators of the corrosion resistance of metallic materials and second because such measurements are based on general and fundamental physical, chemical, and electrochemical relations....... Hence improvements and innovations in methods applied in corrosion research are likely to benefit basic disciplines as well. A method for corrosion measurements can only provide reliable data if the background of the method is fully understood. Failure of a method to give correct data indicates a need...... to revise assumptions regarding the basis of the method, which sometimes leads to the discovery of as-yet unnoticed phenomena. The present selection of automated methods for corrosion measurements is not motivated simply by the fact that a certain measurement can be performed automatically. Automation...

  19. CCD characterization and measurements automation

    International Nuclear Information System (INIS)

    Modern mosaic cameras have grown both in size and in number of sensors. The required volume of sensor testing and characterization has grown accordingly. For camera projects as large as the LSST, test automation becomes a necessity. A CCD testing and characterization laboratory was built and is in operation for the LSST project. Characterization of LSST study contract sensors has been performed. The characterization process and its automation are discussed, and results are presented. Our system automatically acquires images, populates a database with metadata information, and runs express analysis. This approach is illustrated on 55Fe data analysis. 55Fe data are used to measure gain, charge transfer efficiency and charge diffusion. Examples of express analysis results are presented and discussed.

  20. Automated nanomanipulation for nanodevice construction

    International Nuclear Information System (INIS)

    Nanowire field-effect transistors (nano-FETs) are nanodevices capable of highly sensitive, label-free sensing of molecules. However, significant variations in sensitivity across devices can result from poor control over device parameters, such as nanowire diameter and the number of electrode-bridging nanowires. This paper presents a fabrication approach that uses wafer-scale nanowire contact printing for throughput and uses automated nanomanipulation for precision control of nanowire number and diameter. The process requires only one photolithography mask. Using nanowire contact printing and post-processing (i.e. nanomanipulation inside a scanning electron microscope), we are able to produce devices all with a single-nanowire and similar diameters at a speed of ∼1 min/device with a success rate of 95% (n = 500). This technology represents a seamless integration of wafer-scale microfabrication and automated nanorobotic manipulation for producing nano-FET sensors with consistent response across devices. (paper)

  1. DOLFIN: Automated Finite Element Computing

    CERN Document Server

    Logg, Anders; 10.1145/1731022.1731030

    2011-01-01

    We describe here a library aimed at automating the solution of partial differential equations using the finite element method. By employing novel techniques for automated code generation, the library combines a high level of expressiveness with efficient computation. Finite element variational forms may be expressed in near mathematical notation, from which low-level code is automatically generated, compiled and seamlessly integrated with efficient implementations of computational meshes and high-performance linear algebra. Easy-to-use object-oriented interfaces to the library are provided in the form of a C++ library and a Python module. This paper discusses the mathematical abstractions and methods used in the design of the library and its implementation. A number of examples are presented to demonstrate the use of the library in application code.
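
    The "near mathematical notation" mentioned in the abstract looks roughly like the following Poisson-problem sketch, written against the legacy dolfin Python module (interface details vary between releases, so treat this as indicative):

    ```python
    from dolfin import *  # legacy FEniCS/DOLFIN Python interface

    # Poisson problem: -div(grad(u)) = f on the unit square, u = 0 on the boundary.
    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "Lagrange", 1)
    u, v = TrialFunction(V), TestFunction(V)
    f = Constant(1.0)

    a = inner(grad(u), grad(v)) * dx   # bilinear form, near-mathematical notation
    L = f * v * dx                     # linear form
    bc = DirichletBC(V, Constant(0.0), "on_boundary")

    uh = Function(V)
    solve(a == L, uh, bc)              # code generation, assembly and solve happen here
    ```

    The point of the library is that the variational form above is the program: the low-level assembly code is generated automatically from it.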

  2. Automated illustration of patients instructions.

    Science.gov (United States)

    Bui, Duy; Nakamura, Carlos; Bray, Bruce E; Zeng-Treitler, Qing

    2012-01-01

    A picture can be a powerful communication tool. However, creating pictures to illustrate patient instructions can be a costly and time-consuming task. Building on our prior research in this area, we developed a computer application that automatically converts text to pictures using natural language processing and computer graphics techniques. After iterative testing, the automated illustration system was evaluated using 49 previously unseen cardiology discharge instructions. The completeness of the system-generated illustrations was assessed by three raters using a three-level scale. The average inter-rater agreement for text correctly represented in the pictograph was about 66 percent. Since illustration in this context is intended to enhance rather than replace text, these results support the feasibility of conducting automated illustration.

  3. Safeguards Culture

    Energy Technology Data Exchange (ETDEWEB)

    Frazar, Sarah L.; Mladineo, Stephen V.

    2012-07-01

    The concepts of nuclear safety and security culture are well established; however, a common understanding of safeguards culture is not internationally recognized. Supported by the National Nuclear Security Administration, the authors prepared this report, an analysis of the concept of safeguards culture, and gauged its value to the safeguards community. The authors explored distinctions between safeguards culture, safeguards compliance, and safeguards performance, and evaluated synergies and differences between safeguards culture and safety/security culture. The report concludes with suggested next steps.

  4. Easy and Effective Parallel Programmable ETL

    DEFF Research Database (Denmark)

    Thomsen, Christian; Pedersen, Torben Bach

    2011-01-01

It is typically the case that the ETL program can exploit both task parallelism and data parallelism to run faster. This, on the other hand, makes the development time longer, as it is complex to create a parallel ETL program. To remedy this situation, we propose efficient ways to parallelize typical ETL tasks... and we implement these new constructs in an ETL framework. The constructs are easy to apply and require only few modifications to an ETL program to parallelize it. They support both task and data parallelism and give the programmer different possibilities to choose from. An experimental evaluation...

  5. Automated medical image segmentation techniques

    OpenAIRE

    Sharma Neeraj; Aggarwal Lalit

    2010-01-01

Accurate segmentation of medical images is a key step in contouring during radiotherapy planning. Computed tomography (CT) and magnetic resonance (MR) imaging are the most widely used radiographic techniques in diagnosis, clinical studies and treatment planning. This review provides details of automated segmentation methods, specifically discussed in the context of CT and MR images. The motive is to discuss the problems encountered in segmentation of CT and MR images, and the relative merits ...

  6. Automated Periodontal Diseases Classification System

    OpenAIRE

    Aliaa A. A. Youssif; Abeer Saad Gawish,; Mohammed Elsaid Moussa

    2012-01-01

This paper presents an efficient and innovative system for automated classification of periodontal diseases. The strength of our technique lies in the fact that it incorporates knowledge from the patients' clinical data, along with the features automatically extracted from the Haematoxylin and Eosin (H&E) stained microscopic images. Our system uses image processing techniques based on color deconvolution, morphological operations, and watershed transforms for epithelium & connective tissue se...

  7. Home automation in the workplace.

    Science.gov (United States)

    McCormack, J E; Tello, S F

    1994-01-01

    Environmental control units and home automation devices contribute to the independence and potential of individuals with disabilities, both at work and at home. Devices currently exist that can assist people with physical, cognitive, and sensory disabilities to control lighting, appliances, temperature, security, and telephone communications. This article highlights several possible applications for these technologies and discusses emerging technologies that will increase the benefits these devices offer people with disabilities. PMID:24440955

  8. Small Business Innovations (Automated Information)

    Science.gov (United States)

    1992-01-01

    Bruce G. Jackson & Associates Document Director is an automated tool that combines word processing and database management technologies to offer the flexibility and convenience of text processing with the linking capability of database management. Originally developed for NASA, it provides a means to collect and manage information associated with requirements development. The software system was used by NASA in the design of the Assured Crew Return Vehicle, as well as by other government and commercial organizations including the Southwest Research Institute.

  9. Automated Scheduling Via Artificial Intelligence

    Science.gov (United States)

    Biefeld, Eric W.; Cooper, Lynne P.

    1991-01-01

    Artificial-intelligence software that automates scheduling developed in Operations Mission Planner (OMP) research project. Software used in both generation of new schedules and modification of existing schedules in view of changes in tasks and/or available resources. Approach based on iterative refinement. Although project focused upon scheduling of operations of scientific instruments and other equipment aboard spacecraft, also applicable to such terrestrial problems as scheduling production in factory.

  10. System of automated map design

    International Nuclear Information System (INIS)

The preprint 'System of automated map design' describes a program shell for constructing territory maps, performing level-line drawing of an arbitrary two-dimensional field (in particular, the radionuclide concentration field). The work schedule and data structures are supplied, as well as data on system performance. The preprint can be useful for experts in radioecology and for all persons involved in territory pollution mapping or multi-purpose geochemical mapping. (author)

  11. Method development in automated mineralogy

    OpenAIRE

    Sandmann, Dirk

    2015-01-01

    The underlying research that resulted in this doctoral dissertation was performed at the Division of Economic Geology and Petrology of the Department of Mineralogy, TU Bergakademie Freiberg between 2011 and 2014. It was the primary aim of this thesis to develop and test novel applications for the technology of ‘Automated Mineralogy’ in the field of economic geology and geometallurgy. A “Mineral Liberation Analyser” (MLA) instrument of FEI Company was used to conduct most analytical studies. T...

  12. GUI test automation with SWTBot

    OpenAIRE

    Mazurkiewicz, Milosz

    2010-01-01

    In this thesis the author presents theoretical background of GUI test automation as well as technologies, tools and methodologies required to fully understand the test program written in SWTBot. Practical part of the thesis was to implement a program testing File Menu options of Pegasus RCP application developed in Nokia Siemens Networks. Concluding this dissertation, in the author’s opinion test programs written using SWTBot are relatively easy to read and intuitive for people familiar w...

  13. Automated minimax design of networks

    DEFF Research Database (Denmark)

    Madsen, Kaj; Schjær-Jacobsen, Hans; Voldby, J

    1975-01-01

A new gradient algorithm for the solution of nonlinear minimax problems has been developed. The algorithm is well suited for automated minimax design of networks and it is very simple to use. It compares favorably with recent minimax and least-pth algorithms. General convergence problems related...... to minimax design of networks are discussed. Finally, minimax design of equalization networks for reflection-type microwave amplifiers is carried out by means of the proposed algorithm....

  14. Adaptation: A Partially Automated Approach

    OpenAIRE

    Manjing, Tham; Bukhsh, F.A.; Weigand, H.

    2014-01-01

This paper showcases the possibility of creating an adaptive auditing system. Adaptation in an audit environment needs human intervention at some point. Based on a case study, this paper focuses on automation of the adaptation process. It is divided into solution design and validation parts. The artifact design is developed around the import procedures of M-company. An overview of the artifact is discussed in detail to fully describe the adaptation mechanism with automatic adjustment for compliance re...

  15. The Experience of Large Computational Programs Parallelization. Parallel Version of MINUIT Program

    CERN Document Server

    Sapozhnikov, A P

    2003-01-01

The problems around the parallelization of large computational programs are discussed. As an example, a parallel version of MINUIT, a widely used program for minimization, is introduced. The results of testing the MPI-based MINUIT on a multiprocessor system demonstrate the parallelism actually achieved.
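
    The abstract does not detail the parallelization, but the bulk of the work in such minimizers is the many objective-function evaluations per step. A toy sketch of distributing finite-difference gradient components over MPI ranks (an assumption for illustration, not the actual parallel MINUIT):

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def chi2(p):                                 # stand-in objective function
        return float(np.sum((p - 3.0) ** 2))

    def grad(p, h=1e-6):
        g = np.zeros_like(p)
        for i in range(rank, len(p), size):      # each rank handles a slice of components
            e = np.zeros_like(p)
            e[i] = h
            g[i] = (chi2(p + e) - chi2(p - e)) / (2 * h)
        return comm.allreduce(g, op=MPI.SUM)     # combine the partial gradients

    p = np.zeros(8)
    for _ in range(100):                         # plain gradient descent as a stand-in
        p -= 0.1 * grad(p)                       # for MINUIT's internal strategy
    if rank == 0:
        print(p)                                 # converges to 3.0 in every component
    ```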

  16. Automated Synthesis of a Library of Triazolated 1,2,5-Thiadiazepane 1,1-Dioxides via a Double aza-Michael Strategy

    Science.gov (United States)

    Zang, Qin; Javed, Salim; Hill, David; Ullah, Farman; Bi, Danse; Porubsky, Patrick; Neuenswander, Benjamin; Lushington, Gerald H.; Santini, Conrad; Organ, Michael G.; Hanson, Paul R.

    2013-01-01

    The construction of a 96-member library of triazolated 1,2,5-thiadiazepane 1,1-dioxides was performed on a Chemspeed Accelerator (SLT-100) automated parallel synthesis platform, culminating in the successful preparation of 94 out of 96 possible products. The key step, a one-pot, sequential elimination, double-aza-Michael reaction, and [3+2] Huisgen cycloaddition pathway has been automated and utilized in the production of two sets of triazolated sultam products. PMID:22853708

  17. Organizational Culture

    Directory of Open Access Journals (Sweden)

    Adrian HUDREA

    2006-02-01

Cultural orientations of an organization can be its greatest strength, providing the basis for problem solving, cooperation, and communication. Culture, however, can also inhibit needed changes. Cultural changes typically happen slowly – but without cultural change, many other organizational changes are doomed to fail. The dominant culture of an organization is a major contributor to its success. But, of course, no organizational culture is purely one type or another. And the existence of secondary cultures can provide the basis for change. Therefore, organizations need to understand the cultural environments and values.

  18. Cloud identification using genetic algorithms and massively parallel computation

    Science.gov (United States)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

As a Guest Computational Investigator under the NASA-administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). Given that one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally, we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
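
    The subpopulation-and-migration scheme mentioned at the end of the abstract is commonly called an island model. A toy, single-process sketch of the idea (illustrative only, not the MasPar implementation):

    ```python
    import random

    # Island-model GA: several subpopulations evolve independently and
    # periodically migrate their best individuals to a neighbouring island.
    GENES, POP, ISLANDS = 32, 20, 4
    target = [1] * GENES
    fitness = lambda ind: sum(g == t for g, t in zip(ind, target))

    islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
               for _ in range(ISLANDS)]

    for gen in range(200):
        for pop in islands:
            pop.sort(key=fitness, reverse=True)
            elite = pop[:POP // 2]                   # selection
            children = []
            for _ in range(POP - len(elite)):
                a, b = random.sample(elite, 2)
                cut = random.randrange(GENES)        # one-point crossover
                child = a[:cut] + b[cut:]
                child[random.randrange(GENES)] ^= 1  # point mutation
                children.append(child)
            pop[:] = elite + children
        if gen % 20 == 0:                            # migration step, ring topology
            best = [max(pop, key=fitness) for pop in islands]
            for i, pop in enumerate(islands):
                pop[-1] = best[(i + 1) % ISLANDS][:]

    print(max(fitness(ind) for pop in islands for ind in pop))
    ```

    On a SIMD machine each island (or each individual) maps onto its own processing element; the serial loop above only simulates that layout.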

  19. Vision feedback driven automated assembly of photopolymerized structures by parallel optical trapping and manipulation

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Perch-Nielsen, Ivan Ryberg; Rodrigo, Peter John;

    2007-01-01

    We demonstrate how optical trapping and manipulation can be used to assemble microstructures. The microstructures we show being automatically recognized and manipulated are produced using the two-photon polymerization (2PP) technique with submicron resolution. In this work, we show identical shape......-complementary puzzle pieces being manipulated in a fluidic environment forming space-filling tessellations. By implementation of image analysis to detect the puzzle pieces, we developed a system capable of assembling a puzzle with no user interaction required. This allows for automatic gathering of sparsely scattered...

  20. Routine tests and automated urinalysis in patients with suspected urinary tract infection at the ED

    NARCIS (Netherlands)

    Middelkoop, S J M; van Pelt, L J; Kampinga, G A; Ter Maaten, J C; Stegeman, C A

    2016-01-01

    BACKGROUND: Urinary tract infections (UTIs) are frequently encountered. Diagnostics of UTI (urine dipstick, Gram stain, urine culture) lack proven accuracy and precision in the emergency department. Utility of automated urinalysis shows promise for UTI diagnosis but has not been validated. METHODS:

  1. Rapid, automated identification of novobiocin-resistant, coagulase-negative staphylococci.

    OpenAIRE

    Mendes, C. M.; Siqueira, L F; Francisco, W

    1985-01-01

    A modified automated method that uses the MS-2 system (Abbott Laboratories, Diagnostics Div., Irving, Tex.) to verify the reaction of coagulase-negative staphylococci to novobiocin is described. This technique permits the testing of a great number of specimens in an average time of 99 min and results in a 100% match with the traditional method of culturing.

  2. Thermophilic Campylobacter spp. in turkey samples: evaluation of two automated enzyme immunoassays and conventional microbiological techniques

    DEFF Research Database (Denmark)

    Borck, Birgitte; Stryhn, H.; Ersboll, A.K.;

    2002-01-01

    Aims: To determine the sensitivity and specificity of two automated enzyme immunoassays (EIA), EiaFoss and Minividas, and a conventional microbiological culture technique for detecting thermophilic Campylobacter spp. in turkey samples. Methods and Results: A total of 286 samples (faecal, meat...

  3. Mainstreaming culture in psychology.

    Science.gov (United States)

    Cheung, Fanny M

    2012-11-01

    Despite the "awakening" to the importance of culture in psychology in America, international psychology has remained on the sidelines of psychological science. The author recounts her personal and professional experience in tandem with the stages of development in international/cross-cultural psychology. Based on her research in cross-cultural personality assessment, the author discusses the inadequacies of sole reliance on either the etic or the emic approach and points out the advantages of a combined emic-etic approach in bridging global and local human experiences in psychological science and practice. With the blurring of the boundaries between North American-European psychologies and psychology in the rest of the world, there is a need to mainstream culture in psychology's epistemological paradigm. Borrowing from the concept of gender mainstreaming that embraces both similarities and differences in promoting equal opportunities, the author discusses the parallel needs of acknowledging universals and specifics when mainstreaming culture in psychology. She calls for building a culturally informed universal knowledge base that should be incorporated in the psychology curriculum and textbooks.

  4. Automated Cache Performance Analysis And Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-23

While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters

  5. Parallel synthesis of a series of potentially brain penetrant aminoalkyl benzoimidazoles.

    Science.gov (United States)

    Micco, Iolanda; Nencini, Arianna; Quinn, Joanna; Bothmann, Hendrick; Ghiron, Chiara; Padova, Alessandro; Papini, Silvia

    2008-03-01

Alpha7 agonists were identified via GOLD (CCDC) docking in the putative agonist binding site of an alpha7 homology model and a series of aminoalkyl benzoimidazoles was synthesised to obtain potentially brain penetrant drugs. The array was prepared starting from the reaction of ortho-fluoronitrobenzenes with a selection of diamines, followed by reduction of the nitro group to obtain a series of monoalkylated phenylene diamines. N,N'-Carbonyldiimidazole (CDI) mediated acylation, followed by a parallel automated work-up procedure, afforded the monoacylated phenylenediamines which were cyclised under acidic conditions. Parallel work-up and purification afforded the array products in good yields and purities with a robust parallel methodology which will be useful for other libraries. Screening for alpha7 activity revealed compounds with agonist activity for the receptor. PMID:18078760

  7. Applied Parallel Computing Industrial Computation and Optimization

    DEFF Research Database (Denmark)

Madsen, Kaj; Olesen, Dorte

Proceedings of the Third International Workshop on Applied Parallel Computing in Industrial Problems and Optimization (PARA96).

  8. Parallel kinematics type, kinematics, and optimal design

    CERN Document Server

    Liu, Xin-Jun

    2014-01-01

Parallel Kinematics: Type, Kinematics, and Optimal Design presents the results of 15 years' research on parallel mechanisms and parallel kinematics machines. This book covers the systematic classification of parallel mechanisms (PMs) as well as providing a large number of mechanical architectures of PMs available for use in practical applications. It focuses on the kinematic design of parallel robots. One successful application of parallel mechanisms in the field of machine tools, also called parallel kinematics machines, has been the emerging trend in advanced machine tools. The book describes not only the main aspects and important topics in parallel kinematics, but also references novel concepts and approaches, i.e. type synthesis based on evolution, performance evaluation and optimization based on screw theory, singularity model taking into account motion and force transmissibility, and others. This book is intended for researchers, scientists, engineers and postgraduates or above with interes...

  9. Parallel String Matching Algorithm Using Grid

    Directory of Open Access Journals (Sweden)

    K.M.M Rajashekharaiah

    2012-06-01

Grid computing provides solutions for various complex problems. Implementing computing-intensive applications such as the string matching problem on a grid is inevitable. In this paper we present a new method for exact string matching based on grid computing. The function of the grid is to parallelize the string matching problem using the grid MPI parallel programming method or loosely coupled parallel services on the grid. There can be a dedicated grid; alternatively, resources can be shared among an existing network of computing resources. We propose a dedicated grid. Parallel applications fall under three categories, namely perfect parallelism, data parallelism and functional parallelism. We use data parallelism, also called the Single Program Multiple Data (SPMD) method, where the given data is divided into several parts that are worked on simultaneously. This improves the execution time of the string matching algorithms on the grid. The simulation results show significant improvement in execution time and speedup.
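
    The data-parallel (SPMD) splitting described here can be sketched without a grid at all. In this toy version (an illustration, not the paper's grid code), chunks overlap by len(pattern) - 1 characters so that matches straddling a chunk boundary are not lost or duplicated:

    ```python
    from multiprocessing import Pool

    def search(args):
        text, pattern, offset = args
        hits, i = [], text.find(pattern)
        while i != -1:
            hits.append(offset + i)              # translate to global position
            i = text.find(pattern, i + 1)
        return hits

    def parallel_find(text, pattern, workers=4):
        n, k = len(text), len(pattern)
        step = (n + workers - 1) // workers
        # Chunk [s, s+step) plus k-1 overlap characters; a match found at local
        # index i can only start inside [s, s+step), so there are no duplicates.
        jobs = [(text[s:s + step + k - 1], pattern, s) for s in range(0, n, step)]
        with Pool(workers) as pool:
            return sorted(h for part in pool.map(search, jobs) for h in part)

    if __name__ == "__main__":
        print(parallel_find("abracadabra" * 1000, "abra")[:5])
    ```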

  10. Lock-Free Parallel Access Collections

    Directory of Open Access Journals (Sweden)

    Bruce P. Lester

    2014-06-01

All new computers have multicore processors. To exploit this hardware parallelism for improved performance, the predominant approach today is multithreading using shared variables and locks. This approach has potential data races that can create a nondeterministic program. This paper presents a promising new approach to parallel programming that is both lock-free and deterministic. The standard forall primitive for parallel execution of for-loop iterations is extended into a more highly structured primitive called a Parallel Operation (POP). Each parallel process created by a POP may read shared variables (or shared collections) freely. Shared collections modified by a POP must be selected from a special set of predefined Parallel Access Collections (PACs). Each PAC has several write modes that govern parallel updates in a deterministic way. This paper presents an overview of a prototype library that implements this POP-PAC approach for the C++ language, including performance results for two benchmark parallel programs.
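
    The deterministic, lock-free flavor of the POP construct can be suggested with a small sketch (a conceptual analogy in Python, not the paper's C++ library): each parallel iteration writes only its own slot of the result collection, so the merged output is identical on every run regardless of scheduling order:

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def body(i):
        return i * i                  # each "parallel iteration" computes its own slot

    def forall(n, fn, workers=4):
        # Slot i receives fn(i); results are merged in index order, so the
        # collection's final state never depends on which worker ran first.
        with ProcessPoolExecutor(workers) as ex:
            return list(ex.map(fn, range(n)))

    if __name__ == "__main__":
        print(forall(10, body))       # [0, 1, 4, ..., 81] every run, no locks needed
    ```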

  11. Overview of the Force Scientific Parallel Language

    Directory of Open Access Journals (Sweden)

    Gita Alaghband

    1994-01-01

The Force parallel programming language designed for large-scale shared-memory multiprocessors is presented. The language provides a number of parallel constructs as extensions to the ordinary Fortran language and is implemented as a two-level macro preprocessor to support portability across shared memory multiprocessors. The global parallelism model on which the Force is based provides a powerful parallel language. The parallel constructs, generic synchronization, and freedom from process management supported by the Force have resulted in structured parallel programs that are ported to the many multiprocessors on which the Force is implemented. Two new parallel constructs for looping and functional decomposition are discussed. Several programming examples to illustrate some parallel programming approaches using the Force are also presented.

  12. Bisection technique for designing synchronous parallel algorithms

    Institute of Scientific and Technical Information of China (English)

    王能超

    1995-01-01

    A basic technique for designing synchronous parallel algorithms, the so-called bisection technique, is proposed. The basic pattern of designing parallel algorithms is described. The relationship between the designing idea and I Ching (principles of change) is discussed.

  13. Parallel Imports and Music CD Prices

    OpenAIRE

    Yeh-ning Chen; Png, I. P. L.

    2004-01-01

    Parallel imports are a significant academic and policy issue. Official investigations into the impact of parallel imports on music CD prices have reached widely conflicting conclusions. This note reports an event study on an international panel of changes in copyright law to permit or disallow parallel imports. The study shows that, on average, legalization of parallel imports was associated with a 7.2–7.9% reduction in the retail price of music CDs.

  14. Parallel machine architecture and compiler design facilities

    Science.gov (United States)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility to allow rapid prototyping of parallelized compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  15. The parallel volume at large distances

    DEFF Research Database (Denmark)

    Kampf, Jürgen

In this paper we examine the asymptotic behavior of the parallel volume of planar non-convex bodies as the distance tends to infinity. We show that the difference between the parallel volume of the convex hull of a body and the parallel volume of the body itself tends to 0. This yields a new proof...... for the fact that a planar body can only have polynomial parallel volume, if it is convex. Extensions to Minkowski spaces and random sets are also discussed....

  17. Using Parallel Platforms as Climbing Robots

    OpenAIRE

    Reinoso, Oscar; Aracil, Rafael; Saltaren, Roque

    2006-01-01

At present, parallel robots show great progress in their development due to their behaviour in multiple applications. In this sense, the Stewart-Gough platform with proper mechanical adaptations could be used as a climbing parallel robot. A climbing parallel robot with 6 degrees of freedom has been proposed and analysed. Parallel robots have great advantages compared to serial robots with legs when used as climbing robots. Some advantages can be cited, such as the high payload capacity, robus...

  18. Classical MD calculations with parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Mitsuhiro [Nagoya Univ. (Japan)

    1998-03-01

We have developed parallel computation codes for a classical molecular dynamics (MD) method. In order to use them on workstation clusters as well as parallel supercomputers, we use the MPI (message passing interface) library for distributed-memory computers. Two algorithms are compared: (1) the particle parallelism technique: easy to install, effective for a rather small number of processors; (2) the region parallelism technique: takes some time to install, effective even for many nodes. (J.P.N.)
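
    A minimal sketch of the "particle parallelism" variant (an illustration under stated assumptions, not the paper's code): every rank holds all positions, computes forces only for its own subset of particles, and the updated positions are re-gathered afterwards:

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 64
    pos = np.random.default_rng(0).random((n, 3)) if rank == 0 else None
    pos = comm.bcast(pos, root=0)            # identical starting positions on every rank

    mine = list(range(rank, n, size))        # particles owned by this rank
    forces = np.zeros((n, 3))
    for i in mine:                           # O(n^2) pair loop, only for owned rows
        d = pos - pos[i]
        r2 = (d * d).sum(axis=1)
        r2[i] = np.inf                       # skip self-interaction
        forces[i] = (d / r2[:, None]).sum(axis=0)   # toy pair force, not a real potential

    pos[mine] += 1e-4 * forces[mine]         # toy integration step for owned particles
    for idx, vals in comm.allgather((mine, pos[mine])):
        pos[idx] = vals                      # re-gather updated positions everywhere
    ```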

  19. Force user's manual: A portable, parallel FORTRAN

    Science.gov (United States)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN on shared memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to computers on which Force is installed. Although Force is nearly the same for all computers, specific details are included for the Cray-2, Cray-YMP, Convex 220, Flex/32, Encore, Sequent, Alliant computers on which it is installed.

  20. Industrial cultures

    DEFF Research Database (Denmark)

    Rasmussen, Lauge Baungaard

    1996-01-01

The chapter deals with different paradigms and theories of cultural development. The problem of explaining change and methods of analysing development in different cultures are presented and discussed.

  1. Scalable Parallel Algebraic Multigrid Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multilevel algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved; and the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  2. Unparallel View of Parallel Economy

    OpenAIRE

    Batabyal, Partha

    2009-01-01

Despite the considerable media attention given to the financial meltdown, there has been little attention on why this happened. This paper investigates this matter and takes a very unique point of view. Whenever we talk about this matter we think of the liquidity crunch, the subprime crisis, high fiscal and trade deficits, etc., but we never think about the parallel economy and money laundering. Here are some facts regarding this aspect: the amount of money laundered in the whole world is close...

  3. Parallel execution of portfolio optimization

    CERN Document Server

    Nuriyev, R

    2008-01-01

Analysis of asset liability management (ALM) strategies, especially over a long-term horizon, is a crucial issue for banks, funds and insurance companies. Modern economic models, investment strategies and optimization criteria make ALM studies a computationally very intensive task. This attracts attention to multiprocessor systems and especially to the cheapest ones: multi-core PCs and PC clusters. In this article we analyze the problem of the parallel organization of portfolio optimization, the results of using clusters for optimization, and the most efficient cluster architecture for these kinds of tasks.
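
    Candidate strategies can be evaluated independently, which is why the task maps naturally onto the cheap multi-core PCs and clusters the article mentions. A toy sketch with a process pool (hypothetical strategy parameter and objective, for illustration only):

    ```python
    from multiprocessing import Pool
    import random

    def evaluate(strategy):
        # Toy scenario simulation: the "strategy" is just an integer that tilts
        # the drift of a random-walk wealth process.
        random.seed(strategy)                     # reproducible per-strategy scenarios
        wealth = 1.0
        for _ in range(10_000):
            wealth *= 1.0 + random.gauss(strategy * 1e-4, 0.01)
        return strategy, wealth

    if __name__ == "__main__":
        with Pool() as pool:
            results = pool.map(evaluate, range(32))   # one candidate per task
        print(max(results, key=lambda r: r[1]))       # best strategy found
    ```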

  4. Parallel algorithms and cluster computing

    CERN Document Server

    Hoffmann, Karl Heinz

    2007-01-01

    This book presents major advances in high performance computing as well as major advances due to high performance computing. It contains a collection of papers in which results achieved in the collaboration of scientists from computer science, mathematics, physics, and mechanical engineering are presented. From the science problems to the mathematical algorithms and on to the effective implementation of these algorithms on massively parallel and cluster computers we present state-of-the-art methods and technology as well as exemplary results in these fields. This book shows that problems which seem superficially distinct become intimately connected on a computational level.

  5. Parallel Computation Of Forward Dynamics Of Manipulators

    Science.gov (United States)

    Fijany, Amir; Bejczy, Antal K.

    1993-01-01

    Report presents parallel algorithms and special parallel architecture for computation of forward dynamics of robotics manipulators. Products of effort to find best method of parallel computation to achieve required computational efficiency. Significant speedup of computation anticipated as well as cost reduction.

  6. Reservoir Thermal Recover Simulation on Parallel Computers

    Science.gov (United States)

    Li, Baoyan; Ma, Yuanle

The rapid development of parallel computers has provided a hardware background for large-scale, refined reservoir simulation. However, the lack of parallel reservoir simulation software has blocked the application of parallel computers to reservoir simulation. Although a variety of parallel methods have been studied and applied to black oil, compositional, and chemical model numerical simulations, there has been limited parallel software available for reservoir simulation. In particular, the parallelization of reservoir thermal recovery simulation has not been fully studied, because of the complexity of its models and algorithms. The authors make use of the message passing interface (MPI) standard communication library, the domain decomposition method, the block Jacobi iteration algorithm, and the dynamic memory allocation technique to parallelize their serial thermal recovery simulation software NUMSIP, which is being used in the petroleum industry in China. The parallel software PNUMSIP was tested on both IBM SP2 and Dawn 1000A distributed-memory parallel computers. The experimental results show that the parallelization of I/O has great effects on the efficiency of the parallel software PNUMSIP; the data communication bandwidth is also an important factor influencing software efficiency. Keywords: domain decomposition method, block Jacobi iteration algorithm, reservoir thermal recovery simulation, distributed-memory parallel computer
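
    The block Jacobi iteration named in the keywords decouples the diagonal blocks so that each subdomain can be solved independently, hence in parallel. A serial toy sketch of the iteration itself (an illustration, not the PNUMSIP code):

    ```python
    import numpy as np

    def block_jacobi(A, b, nblocks=4, iters=200):
        n = len(b)
        bounds = np.linspace(0, n, nblocks + 1, dtype=int)
        x = np.zeros(n)
        for _ in range(iters):
            x_new = x.copy()
            for s, e in zip(bounds[:-1], bounds[1:]):   # each block = one subdomain
                # Residual against off-block unknowns, then a local block solve:
                r = b[s:e] - A[s:e] @ x + A[s:e, s:e] @ x[s:e]
                x_new[s:e] = np.linalg.solve(A[s:e, s:e], r)
            x = x_new                                    # blocks updated "simultaneously"
        return x

    # Diagonally dominant test system, for which the iteration converges.
    n = 40
    A = np.eye(n) * 4 - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    print(np.allclose(block_jacobi(A, b), np.linalg.solve(A, b), atol=1e-6))
    ```

    In the distributed-memory setting each block lives on its own processor, and only the off-block parts of the residual require communication per iteration.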

  7. Parallel Computing Using Web Servers and "Servlets".

    Science.gov (United States)

    Lo, Alfred; Bloor, Chris; Choi, Y. K.

    2000-01-01

    Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…

  8. A Framework for Parallel Programming in Java

    OpenAIRE

    Launay, Pascale; Pazat, Jean-Louis

    1997-01-01

To ease the task of programming parallel and distributed applications, the Do! project aims at the automatic generation of distributed code from multi-threaded Java programs. We provide a parallel programming model, embedded in a framework that constrains parallelism, without any extension to the Java language. This framework is described here and is used as a basis to generate distributed programs.

  9. A Parallel Approach to Fractal Image Compression

    Directory of Open Access Journals (Sweden)

    Lubomir Dedera

    2004-01-01

The paper deals with a parallel approach to coding and decoding algorithms in fractal image compression and presents experimental results comparing sequential and parallel algorithms from the point of view of both the achieved coding and decoding times and the effectiveness of parallelization.

  10. Laboratory automation and LIMS in forensics

    DEFF Research Database (Denmark)

    Stangegaard, Michael; Hansen, Anders Johannes; Morling, Niels

    2013-01-01

Implementation of laboratory automation and LIMS in a forensic laboratory enables the laboratory to standardize sample processing. Automated liquid handlers can increase throughput and eliminate manual repetitive pipetting operations, known to result in occupational injuries to the technical staff....... Furthermore, implementation of automated liquid handlers reduces the risk of sample misplacement. A LIMS can efficiently control the sample flow through the laboratory and manage the results of the conducted tests for each sample. Integration of automated liquid handlers with a LIMS provides the laboratory...... with the tools required for setting up automated production lines of complex laboratory processes and monitoring the whole process and the results. Combined, this enables processing of a large number of samples. Selection of the best automated solution for an individual laboratory should be based on user...

  11. Modern trends in automation of industrial complexes

    OpenAIRE

    Svjatnyj, Vladimir A.; Brovkina, Daniella Yu.

    2016-01-01

This paper is dedicated to the study of the transition of manufacturing to a new stage of its development, due to the introduction into the automation process of the latest computer technologies that have become common in the service sector. One of the characteristic features of the development of modern industry is the integration of the achievements of the theory and practice of automation, information technology, robotics and "human–automated object" systems. "Industry 4.0" was designed a...

  12. LALPC: Exploiting Parallelism from FPGAs Using C Language

    Science.gov (United States)

    Porto, Lucas F.; Fernandes, Marcio M.; Bonato, Vanderlei; Menotti, Ricardo

    2015-10-01

This paper presents LALPC, a prototype high-level synthesis tool specialized in hardware generation for loop-intensive code segments. As demonstrated in a previous work, the underlying hardware components targeted by LALPC are highly specialized for loop pipeline execution, resulting in efficient implementations, both in terms of performance and resource usage (silicon area). LALPC extends the functionality of a previous tool by using a subset of the C language as input code to describe computations, improving the usability and potential acceptance of the technique among developers. LALPC also enhances parallelism exploitation by applying loop unrolling, and by providing support for automatic generation and scheduling of parallel memory accesses. The combination of using the C language to automate the process of hardware design with an efficient underlying scheme to support loop pipelining constitutes the main goal and contribution of the work described in this paper. Experimental results have shown the effectiveness of those techniques in enhancing performance, and also exemplify how some of the LALPC compiler features may support performance-resources trade-off analysis tasks.

  13. Parallelisms and revelatory concepts of the Johannine Prologue in Greco-Roman context

    Directory of Open Access Journals (Sweden)

    Benno Zuiddam

    2016-04-01

This article builds on the increasing recognition of divine communication and God's plan as a central concept in the prologue to the Fourth Gospel. A philological analysis reveals parallel structures with an emphasis on divine communication in which the Logos takes a central part. These should be understood within the context of this gospel, but have their roots in the Old Testament. The Septuagint offers parallel concepts, particularly in its wisdom literature. Apart from these derivative parallels, the revelatory concepts and terminology involved in John 1:1–18 also find functional parallels in the historical environment of the Fourth Gospel. They share similarities with the role of Apollo Phoebus in the traditionally assigned geographical context of the region of Ephesus in Asia Minor. This functional parallelism served the reception of John's biblical message in a Greco-Roman cultural setting. Keywords: John's Gospel; Apollo Phoebus; Logos; Revelation; Ephesus

  14. Parallel Bit Interleaved Coded Modulation

    CERN Document Server

    Ingber, Amir

    2010-01-01

    A new variant of bit interleaved coded modulation (BICM) is proposed. In the new scheme, called Parallel BICM, L identical binary codes are used in parallel using a mapper, a newly proposed finite-length interleaver and a binary dither signal. As opposed to previous approaches, the scheme does not rely on any assumptions of an ideal, infinite-length interleaver. Over a memoryless channel, the new scheme is proven to be equivalent to a binary memoryless channel. Therefore the scheme enables one to easily design coded modulation schemes using a simple binary code that was designed for that binary channel. The overall performance of the coded modulation scheme is analytically evaluated based on the performance of the binary code over the binary channel. The new scheme is analyzed from an information theoretic viewpoint, where the capacity, error exponent and channel dispersion are considered. The capacity of the scheme is identical to the BICM capacity. The error exponent of the scheme is numerically compared to...

  15. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  16. Xyce parallel electronic simulator design.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator. As such, having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, and the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort, which involves a number of researchers, engineers, scientists, mathematicians and computer scientists. In addition to diversity of background, it is to be expected on long term projects for there to be a certain amount of staff turnover, as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  17. Flexible End2End Workflow Automation of Hit-Discovery Research.

    Science.gov (United States)

    Holzmüller-Laue, Silke; Göde, Bernd; Thurow, Kerstin

    2014-08-01

    The article considers a new approach to more complex laboratory automation at the workflow layer. The authors propose the automation of end2end workflows. The combination of all relevant subprocesses (whether automated or manually performed, independently of the organizational unit in which they run) results in end2end processes that include all result dependencies. The end2end approach focuses not only on the classical experiments in synthesis or screening, but also on auxiliary processes such as the production and storage of chemicals, cell culturing, and maintenance, as well as preparatory activities and analyses of experiments. Furthermore, the connection of control flow and data flow in the same process model reduces the effort of data transfer between the involved systems, including the necessary data transformations. This end2end laboratory automation can be realized effectively with the modern methods of business process management (BPM). The approach is based on the recent standardization of the process-modeling notation, Business Process Model and Notation 2.0. In drug discovery, several scientific disciplines act together with manifold modern methods, technologies, and a wide range of automated instruments for the discovery and design of target-based drugs. The article discusses the novel BPM-based automation concept with an implemented example of a high-throughput screening of previously synthesized compound libraries. PMID:24464814
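
    The core idea (control flow and data flow carried in one process model spanning automated and manual subprocesses) can be sketched in a few lines. The toy Python runner below, with an entirely hypothetical hit-discovery chain, stands in for a BPMN 2.0 engine:

    ```python
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Step:
        """One subprocess in an end2end workflow (automated or manual)."""
        name: str
        action: Callable[[Dict], Dict]   # consumes and produces data items
        manual: bool = False             # a manual step would pause for a user

    @dataclass
    class Workflow:
        steps: List[Step] = field(default_factory=list)

        def run(self, data: Dict) -> Dict:
            # Control flow and data flow live in the same model: each step's
            # outputs are merged into a shared context for later steps, so no
            # separate transfer/transformation layer is needed.
            for step in self.steps:
                kind = "manual" if step.manual else "automated"
                print(f"running {kind} step: {step.name}")
                data.update(step.action(data))
            return data

    # Hypothetical chain: synthesis -> storage -> screening -> analysis
    wf = Workflow([
        Step("synthesize library", lambda d: {"compounds": 96}),
        Step("register in store", lambda d: {"plate_id": "P-001"}),
        Step("HT screening", lambda d: {"hits": max(1, d["compounds"] // 20)}),
        Step("analyze results", lambda d: {"report": f"{d['hits']} hits on {d['plate_id']}"}),
    ])
    print(wf.run({})["report"])
    ```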

  18. Automated Contingency Management for Propulsion Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — Increasing demand for improved reliability and survivability of mission-critical systems is driving the development of health monitoring and Automated Contingency...

  19. Rapid Automated Mission Planning System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed innovation is an automated UAS mission planning system that will rapidly identify emergency (contingency) landing sites, manage contingency routing,...

  20. ORGANIZATIONAL CULTURE AND MANAGEMENT CULTURE

    OpenAIRE

    Tudor Hobeanu; Loredana Vacarescu Hobeanu

    2010-01-01

    The communication reveals the importance of organizational culture and management culture, as supported by remarkable results at the economic and social levels of the organization. Their functions are presented, together with the specific ways in which the levels of organizational culture are expressed and the ways of adapting to the requirements of the organization's management culture.

  1. Special Effect of Parallel Inductive Electric Field

    Institute of Scientific and Technical Information of China (English)

    陈涛; 刘振兴; W.Heikkila

    2002-01-01

    Acceleration of electrons by a field-aligned electric field during a magnetospheric substorm in the deep geomagnetic tail is studied by means of a one-dimensional electromagnetic particle code. The free acceleration of the electrons by the parallel electric field is evident: in its presence, the kinetic energy variation is greater than the electromagnetic energy variation, whereas in its absence the magnetic energy variation is greater than both the kinetic and electric energy variations. More wave modes are generated in the presence of the parallel electric field than in its absence.

  2. PDDP, A Data Parallel Programming Model

    Directory of Open Access Journals (Sweden)

    Karen H. Warren

    1996-01-01

    PDDP, the parallel data distribution preprocessor, is a data parallel programming model for distributed memory parallel computers. PDDP implements High Performance Fortran-compatible data distribution directives and parallelism expressed by the use of Fortran 90 array syntax, the FORALL statement, and the WHERE construct. Distributed data objects belong to a global name space; other data objects are treated as local and replicated on each processor. PDDP allows the user to program in a shared memory style and generates codes that are portable to a variety of parallel machines. For interprocessor communication, PDDP uses the fastest communication primitives on each platform.
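
    The distribution model can be mimicked with a toy owner-computes sketch in Python/NumPy (an illustration only; PDDP itself operates on Fortran 90 array syntax):

    ```python
    import numpy as np

    def block_ranges(n, nprocs):
        """HPF-style BLOCK distribution: a contiguous chunk of the global
        index space is owned by each (virtual) processor."""
        base, extra = divmod(n, nprocs)
        start = 0
        for p in range(nprocs):
            size = base + (1 if p < extra else 0)
            yield p, start, start + size
            start += size

    n, nprocs = 10, 3
    a = np.arange(n, dtype=float)      # distributed array in a global name space
    b = np.zeros(n)                    # same distribution as a
    c = 2.5                            # scalar: replicated on every processor

    # Owner-computes rule: each processor updates only the block it owns,
    # which is what an array-syntax statement like b = c * a compiles down to.
    for p, lo, hi in block_ranges(n, nprocs):
        b[lo:hi] = c * a[lo:hi]
        print(f"proc {p} owns [{lo}:{hi}) ->", b[lo:hi])
    ```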

  3. Smart Home Automation with Linux

    CERN Document Server

    Goodwin, Steven

    2010-01-01

    Linux users can now control their homes remotely! Are you a Linux user who has ever wanted to turn on the lights in your house, or open and close the curtains, while away on holiday? Want to be able to play the same music in every room, controlled from your laptop or mobile phone? Do you want to do these things without an expensive off-the-shelf kit? In Beginning Linux Home Automation, Steven Goodwin will show you how a house can be fully controlled by its occupants, all using open source software. From appliances to kettles to curtains, control your home remotely! What you'll learn* Control a

  4. Ball Bearing Stacking Automation System

    Directory of Open Access Journals (Sweden)

    Shafeequerrahman S . Ahmed

    2013-01-01

    This document is an effort to introduce the concept of automation into small-scale industries and small workshops that manufacture small objects such as nuts, bolts and, in this case, ball bearings. It is an electromechanical system whose mechanical parts include a base stand on which a vertical metallic frame is mounted and hinged. Purely manual effort seems inadequate in this era, making the use of electronics and computers in manufacturing processes necessary and leading to the concept of the Automated Manufacturing System (AMS). The ball bearing stacking automation is an effort in this regard: stack automation for an object, for example a ball bearing, that is still handled manually. It is a microcontroller-based control system equipped with the 89C51 microcontroller from any manufacturer, such as Atmel or Philips. The stacking unit could easily have been implemented with a PLC, but a microcontroller-based system was adopted so that further modifications can be made at will on the same hardware. Although only a very small object (a ball bearing or a small nut and fixture) is stacked here, a system with more precision and more power-handling capacity could be built for various requirements of industry; for more control capacity, another module of this series can be used. When a bearing is ready, it is sent for packing. This is sensed by an inductive sensor, and the output is processed by the PLC and microcontroller card, which drives the assembly to put the bearings into pads or flaps. The project also counts the total number of bearings to be packed and displays it on an LCD for real-time reference, and provision is made for monitoring from a computer via HyperTerminal using a higher-level language.

  5. Robotium automated testing for Android

    CERN Document Server

    Zadgaonkar, Hrushikesh

    2013-01-01

    This is a step-by-step, example-oriented tutorial aimed at illustrating the various test scenarios and automation capabilities of Robotium. If you are an Android developer who is learning how to create test cases for your applications and are looking for a good grounding in the different features of Robotium, this book is ideal for you. It is assumed that you have some experience in Android development and are familiar with the Android test framework, as Robotium is a wrapper around the Android test framework.

  6. AUTOMATED TESTING OF OPC SERVERS

    CERN Document Server

    Farnham, B

    2011-01-01

    CERN relies on OPC Server implementations from 3rd party device vendors to provide a software interface to their respective hardware. Each time a vendor releases a new OPC Server version, it is regression tested internally to verify that existing functionality has not been inadvertently broken during the process of adding new features. In addition, bugs and problems must be communicated to the vendors in a reliable and portable way. This presentation covers the automated test approach used at CERN to cover both cases: scripts are written in a domain-specific language specifically created for describing OPC tests and are executed by a custom software engine driving the OPC Server implementation.
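
    The pattern, though not CERN's actual DSL or engine, looks roughly like the following: a declarative script of operations and expected results, interpreted by an engine that drives the server under test. A hypothetical Python sketch with an in-memory fake server:

    ```python
    # Hypothetical regression-test pattern: a declarative script (here just a
    # list of (operation, arguments, expected) tuples standing in for a real
    # domain-specific language) interpreted by an engine that drives the
    # server under test and reports regressions.

    def run_script(script, server):
        failures = []
        for op, args, expected in script:
            actual = getattr(server, op)(*args)
            if actual != expected:
                failures.append((op, args, expected, actual))
        return failures

    class FakeOpcServer:
        """Stand-in for a vendor OPC server; real tests would talk OPC."""
        def __init__(self):
            self.items = {"Device1.Temperature": 21.5}
        def read(self, item):
            return self.items[item]
        def write(self, item, value):
            self.items[item] = value
            return True

    script = [
        ("read",  ("Device1.Temperature",), 21.5),
        ("write", ("Device1.Temperature", 25.0), True),
        ("read",  ("Device1.Temperature",), 25.0),
    ]
    print("failures:", run_script(script, FakeOpcServer()))
    ```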

  7. Automated Orientation of Aerial Images

    DEFF Research Database (Denmark)

    Høhle, Joachim

    2002-01-01

    Methods for automated orientation of aerial images are presented. They are based on the use of templates, which are derived from existing databases, and on area-based matching. The characteristics of available database information and the accuracy requirements for map compilation and orthoimage production are discussed using the example of Denmark. Details of the developed methods for interior and exterior orientation are described. Practical examples, such as the measurement of réseau images, the updating of topographic databases and the renewal of orthoimages, are used to prove the feasibility of the developed methods.

  8. Automated visual choice discrimination learning in zebrafish (Danio rerio).

    Science.gov (United States)

    Mueller, Kaspar P; Neuhauss, Stephan C F

    2012-03-01

    Training experimental animals to discriminate between different visual stimuli has been an important tool in cognitive neuroscience as well as in vision research for many decades. Current methods used for visual choice discrimination training of zebrafish require human observers for response tracking, stimulus presentation and reward delivery and, consequently, are very labor intensive and possibly experimenter biased. By combining video tracking of fish positions, stimulus presentation on computer monitors and food delivery by computer-controlled electromagnetic valves, we developed a method that allows for a fully automated training of multiple adult zebrafish to arbitrary visual stimuli in parallel. The standardized training procedure facilitates the comparison of results across different experiments and laboratories and contributes to the usability of zebrafish as vertebrate model organisms in behavioral brain research and vision research. PMID:22744784
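
    The closed loop described (track the fish's position, present the stimuli, open a feeder valve on a correct choice) reduces to a simple control loop. A hypothetical Python sketch, with stand-in functions replacing the camera, the monitors and the valve hardware:

    ```python
    import random, time

    def track_fish():
        """Stand-in for video tracking: which tank half is the fish in?"""
        return random.choice(["left", "right"])

    def present_stimuli():
        """Stand-in for the monitors: randomly assign the rewarded
        stimulus (CS+) to one side for this trial."""
        return random.choice(["left", "right"])

    def open_feeder_valve(ms=50):
        """Stand-in for the computer-controlled electromagnetic valve."""
        print(f"valve open {ms} ms -> food delivered")

    correct, trials = 0, 10
    for trial in range(trials):
        rewarded_side = present_stimuli()
        time.sleep(0.01)                 # stimulus period (shortened here)
        choice = track_fish()
        if choice == rewarded_side:      # fish swam to the CS+ side
            open_feeder_valve()
            correct += 1
    print(f"performance: {correct}/{trials} correct")
    ```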

  9. VLSI implementation of moment invariants for automated inspection

    Science.gov (United States)

    Armstrong, G. A.; Simpson, M. L.; Bouldin, D. W.

    This paper describes the design of a very large scale integration (VLSI) application specific integrated circuit (ASIC) for use in automated inspection. The inspection scheme uses Hu and Maitra's algorithms for moment invariants. A prototype design was generated that resolved the long delay time of the multiplier by custom designing adder cells based on the Manchester carry chain. The prototype ASIC is currently being fabricated in 2.0-micron CMOS technology and has been simulated at 20 MHz. The final ASICs will be used in parallel at the board level to achieve the 230 MOPs necessary to perform the moment invariant algorithms in real time on 512 by 512 pixel images with 256 grey scales.
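
    For reference, the quantities such an ASIC computes in hardware are standard: Hu's invariants are polynomials in the normalized central moments of the image. A NumPy sketch of the first two invariants (a software illustration, not the hardware design):

    ```python
    import numpy as np

    def raw_moment(img, p, q):
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]
        return np.sum((x ** p) * (y ** q) * img)

    def hu_first_two(img):
        """First two Hu invariants (translation/scale/rotation invariant)."""
        m00 = raw_moment(img, 0, 0)
        xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
        y, x = np.mgrid[:img.shape[0], :img.shape[1]]

        def mu(p, q):                    # central moments
            return np.sum(((x - xc) ** p) * ((y - yc) ** q) * img)

        def eta(p, q):                   # normalized central moments
            return mu(p, q) / m00 ** (1 + (p + q) / 2)

        phi1 = eta(2, 0) + eta(0, 2)
        phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
        return phi1, phi2

    img = np.zeros((64, 64)); img[20:40, 25:45] = 1.0   # toy binary part image
    print(hu_first_two(img))
    ```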

  10. Systematic approach for deriving feasible mappings of parallel algorithms to parallel computing platforms

    NARCIS (Netherlands)

    Arkin, Ethem; Tekinerdogan, Bedir; Imre, Kayhan M.

    2016-01-01

    The need for high-performance computing together with the increasing trend from single processor to parallel computer architectures has leveraged the adoption of parallel computing. To benefit from parallel computing power, usually parallel algorithms are defined that can be mapped and executed on parallel computing platforms.

  11. Parallels plane projection and its geometric features

    Institute of Scientific and Technical Information of China (English)

    ZHOU ChengHu; MA Ting; YANG Liao; QIN Biao

    2007-01-01

    A new equivalent map projection called the parallels plane projection is proposed in this paper. The transverse axis of the parallels plane projection is the expansion of the equator and its vertical axis equals half the length of the central meridian. On the parallels plane projection, meridians are projected as sine curves and parallels are a series of straight, parallel lines. No distortion of length occurs along the central meridian or on any parallels of this projection. Angular distortion and the proportion of length along meridians (except the central meridian) introduced by the projection transformation increase with increasing longitude and latitude. A potential application of the parallels plane projection is that it can provide an efficient projection transformation for global discrete grid systems.
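
    The stated properties (straight, parallel lines of latitude, sinusoidal meridians, equal area, and no length distortion along the central meridian or along parallels) match the classical sinusoidal projection, so the following Python sketch is written under that assumption; it is not necessarily the authors' exact transformation:

    ```python
    import numpy as np

    def sinusoidal(lon_deg, lat_deg, lon0_deg=0.0, R=1.0):
        """Equal-area pseudocylindrical projection with straight parallels
        and sinusoidal meridians (a minimal stand-in for the parallels
        plane projection described above)."""
        lon, lat, lon0 = np.radians([lon_deg, lat_deg, lon0_deg])
        x = R * (lon - lon0) * np.cos(lat)   # true length along every parallel
        y = R * lat                          # true length along central meridian
        return x, y

    # A fixed meridian projects to a sine curve in latitude; the equator
    # spans the full transverse axis
    for lat in (0, 30, 60, 89):
        print(lat, sinusoidal(90, lat))
    ```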

  12. Experiences in Data-Parallel Programming

    Directory of Open Access Journals (Sweden)

    Terry W. Clark

    1997-01-01

    To efficiently parallelize a scientific application with a data-parallel compiler requires certain structural properties in the source program, and conversely, the absence of others. A recent parallelization effort of ours reinforced this observation and motivated this correspondence. Specifically, we have transformed a Fortran 77 version of GROMOS, a popular dusty-deck program for molecular dynamics, into Fortran D, a data-parallel dialect of Fortran. During this transformation we have encountered a number of difficulties that probably are neither limited to this particular application nor do they seem likely to be addressed by improved compiler technology in the near future. Our experience with GROMOS suggests a number of points to keep in mind when developing software that may at some time in its life cycle be parallelized with a data-parallel compiler. This note presents some guidelines for engineering data-parallel applications that are compatible with Fortran D or High Performance Fortran compilers.
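
    The kind of structural property at issue can be shown in miniature: an elementwise update parallelizes trivially, while a loop-carried dependence does not. A NumPy illustration (not code from GROMOS):

    ```python
    import numpy as np

    n = 8
    a, b = np.arange(n, dtype=float), np.ones(n)

    # Data-parallel friendly: every element is independent, so a compiler
    # can distribute the iteration space freely (Fortran 90 array syntax:
    # a = a + 2.0 * b)
    a_par = a + 2.0 * b

    # Hostile to data-parallel compilation: iteration i needs the result
    # of iteration i-1 (a loop-carried dependence), forcing serialization
    a_seq = a.copy()
    for i in range(1, n):
        a_seq[i] = a_seq[i - 1] + b[i]

    print(a_par, a_seq, sep="\n")
    ```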

  13. 78 FR 44142 - Modification of Two National Customs Automation Program (NCAP) Tests Concerning Automated...

    Science.gov (United States)

    2013-07-23

    ... Automation Program (NCAP) test called the Document Image System (DIS) test. See 77 FR 20835. The DIS test... Automation Program Test of Automated Manifest Capabilities for Ocean and Rail Carriers: 76 FR 42721 (July 19... (SE test). See 76 FR 69755. The SE test established new entry capability to simplify the entry...

  14. Massachusetts Library Automation Survey: A Directory of Automated Operations in Massachusetts Libraries.

    Science.gov (United States)

    Stephens, Eileen; Nijenberg, Caroline

    This directory is designed to provide information on automated systems and/or equipment used in libraries to provide a tool for planning future automation in the context of interlibrary cooperation considerations, and to inform the library and information community of the state of the art of automation in Massachusetts libraries. The main body is…

  15. Automatic generation of executable communication specifications from parallel applications

    Energy Technology Data Exchange (ETDEWEB)

    Pakin, Scott [Los Alamos National Laboratory]; Wu, Xing [NCSU]; Mueller, Frank [NCSU]

    2011-01-19

    Portable parallel benchmarks are widely used and highly effective for (a) the evaluation, analysis and procurement of high-performance computing (HPC) systems and (b) quantifying the potential benefits of porting applications to new hardware platforms. Yet, past techniques for synthetically parameterized, hand-coded HPC benchmarks prove insufficient for today's rapidly evolving scientific codes, particularly when subject to multi-scale science modeling or when utilizing domain-specific libraries. To address these problems, this work contributes novel methods to automatically generate highly portable and customizable communication benchmarks from HPC applications. We utilize ScalaTrace, a lossless yet scalable parallel application tracing framework, to collect selected aspects of the run-time behavior of HPC applications, including communication operations and execution time, while abstracting away the details of the computation proper. We subsequently generate benchmarks with identical run-time behavior from the collected traces. A unique feature of our approach is that we generate benchmarks in CONCEPTUAL, a domain-specific language that enables the expression of sophisticated communication patterns using a rich and easily understandable grammar, yet compiles to ordinary C + MPI. Experimental results demonstrate that the generated benchmarks are able to preserve the run-time behavior, including both the communication pattern and the execution time, of the original applications. Such automated benchmark generation is particularly valuable for proprietary, export-controlled, or classified application codes: when supplied to a third party, our auto-generated benchmarks ensure performance fidelity without the risks associated with releasing the original code. This ability to automatically generate performance-accurate benchmarks from parallel applications is novel and, to our knowledge, without precedent.
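
    The trace-then-generate idea reduces to a small sketch: record communication events together with inter-event timing, then emit a benchmark that replays message sizes and delays without the original computation. The Python below is illustrative only, standing in for ScalaTrace and CONCEPTUAL:

    ```python
    import time

    def trace_application(app_events):
        """Stand-in for the tracer: record (operation, bytes, elapsed)
        while abstracting away the computation itself."""
        trace, last = [], time.perf_counter()
        for op, nbytes in app_events:
            now = time.perf_counter()
            trace.append(("compute", None, now - last))  # time between comms
            trace.append((op, nbytes, 0.0))
            last = now
        return trace

    def generate_benchmark(trace):
        """Emit a replayable benchmark preserving pattern and timing, with
        none of the original (possibly export-controlled) source logic."""
        def benchmark(send, recv):
            for op, nbytes, dt in trace:
                if op == "compute":
                    time.sleep(dt)           # reproduce execution time
                elif op == "send":
                    send(b"x" * nbytes)      # reproduce message sizes
                elif op == "recv":
                    recv(nbytes)
        return benchmark

    events = [("send", 1024), ("recv", 1024), ("send", 4096)]
    bench = generate_benchmark(trace_application(events))
    bench(send=lambda m: None, recv=lambda n: None)
    print("benchmark replayed", len(events), "communication events")
    ```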

  16. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results, 4. providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10 - Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program, 2. ASTM F 2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program, 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  17. Automated microaneurysm detection in diabetic retinopathy using curvelet transform.

    Science.gov (United States)

    Ali Shah, Syed Ayaz; Laude, Augustinus; Faye, Ibrahima; Tang, Tong Boon

    2016-10-01

    Microaneurysms (MAs) are known to be the early signs of diabetic retinopathy (DR). An automated MA detection system based on the curvelet transform is proposed for color fundus image analysis. MA candidates were extracted in two parallel steps. In step one, blood vessels were removed from the preprocessed green band image and preliminary MA candidates were selected by a local thresholding technique. In step two, the image background was estimated from statistical features. The results from the two steps allowed us to identify the preliminary MA candidates that were also present in the image foreground. A collection of features was fed to a rule-based classifier to divide the candidates into MAs and non-MAs. The proposed system was tested on the Retinopathy Online Challenge database. The automated system detected 162 MAs out of 336, thus achieving a sensitivity of 48.21% with 65 false positives per image. Counting MAs is a means of measuring the progression of DR. Hence, the proposed system may be deployed to monitor the progression of DR at an early stage in population studies. PMID:26868326
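
    Step one of the candidate extraction (green-band preprocessing followed by local thresholding) can be approximated in a few lines of Python/SciPy; the window size, offset and component-size limits below are illustrative guesses, not the paper's parameters:

    ```python
    import numpy as np
    from scipy import ndimage

    def ma_candidates(rgb, window=15, offset=0.03):
        """Step-one style candidate extraction: preprocess the green band
        and keep pixels darker than their local neighborhood mean
        (microaneurysms appear as small dark blobs in the green channel)."""
        green = rgb[..., 1].astype(float) / 255.0
        local_mean = ndimage.uniform_filter(green, size=window)
        candidates = green < (local_mean - offset)   # local thresholding
        # Discard large structures (e.g. vessel fragments) by component size
        labels, n = ndimage.label(candidates)
        sizes = ndimage.sum(candidates, labels, range(1, n + 1))
        keep = np.isin(labels, 1 + np.flatnonzero((sizes >= 3) & (sizes <= 60)))
        return keep

    rgb = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)  # toy input
    print("candidate pixels:", int(ma_candidates(rgb).sum()))
    ```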

  18. Inter-process handling automating system; Koteikan handling jidoka system

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, H. [Meidensha Corp., Tokyo (Japan)

    1994-10-18

    This paper introduces the automation of loading work at production sites by using robots. Loading robots must perform complex movements; they are used for loading work on processing machines, which requires six degrees of freedom, and for relatively simple palletizing work that can be handled with four degrees of freedom. The 'inter-machine handling system' is automated by a ceiling-running robot in which workpiece model identification and positional shift measurement are carried out by image processing. The robot uses the image information to exchange hands automatically as required and clamp a workpiece; it then runs to a machining center (M/C) to replace the processed workpiece, and places the M/C-processed workpiece onto a multi-axial dedicated machine. Five processing machines are operated in parallel, with their cycle time matched to that of this handling process, and a processing machine that has finished processing is serviced in order of priority. As a result, improved productivity and a saving of two workers were achieved simultaneously. 6 figs., 5 tabs.

  19. Automated pipelines for spectroscopic analysis

    Science.gov (United States)

    Allende Prieto, C.

    2016-09-01

    The Gaia mission will have a profound impact on our understanding of the structure and dynamics of the Milky Way. Gaia is providing an exhaustive census of stellar parallaxes, proper motions, positions, colors and radial velocities, but also leaves some glaring holes in an otherwise complete data set. The radial velocities measured with the on-board high-resolution spectrograph will only reach some 10% of the full sample of stars with astrometry and photometry from the mission, and detailed chemical information will be obtained for less than 1%. Teams all over the world are organizing large-scale projects to provide complementary radial velocities and chemistry, since this can now be done very efficiently from the ground thanks to large and mid-size telescopes with a wide field of view and multi-object spectrographs. As a result, automated data processing is taking on an ever increasing relevance, and the concept is being applied to many more areas, from targeting to analysis. In this paper, I provide a quick overview of recent, ongoing, and upcoming spectroscopic surveys, and the strategies adopted in their automated analysis pipelines.

  20. Cassini Tour Atlas Automated Generation

    Science.gov (United States)

    Grazier, Kevin R.; Roumeliotis, Chris; Lange, Robert D.

    2011-01-01

    During the Cassini spacecraft's cruise phase and nominal mission, the Cassini Science Planning Team developed and maintained an online database of geometric and timing information called the Cassini Tour Atlas. The Tour Atlas consisted of several hundreds of megabytes of EVENTS mission planning software outputs, tables, plots, and images used by mission scientists for observation planning. Each time the nominal mission trajectory was altered or tweaked, a new Tour Atlas had to be regenerated manually. In the early phases of Cassini's Equinox Mission planning, an a priori estimate suggested that mission tour designers would develop approximately 30 candidate tours within a short period of time. A separate Tour Atlas was required for each trajectory so that Cassini scientists could analyze the science opportunities in each candidate tour quickly and thoroughly, and select the optimal series of orbits for science return. The task of manually generating that number of trajectory analyses in the allotted time would have been impossible, so the entire task was automated using code written in five different programming languages. This software automates the generation of the Cassini Tour Atlas database. It performs with one UNIX command what previously took a day or two of human labor.

  1. Automated Car Park Management System

    Science.gov (United States)

    Fabros, J. P.; Tabañag, D.; Espra, A.; Gerasta, O. J.

    2015-06-01

    This study aims to develop a prototype Automated Car Park Management System that increases the quality of service of parking lots through the integration of a smart system that assists motorists in finding vacant parking lots. The research was based on implementing an operating system and a monitoring system for parking without the use of manpower. It includes the Parking Guidance and Information System concept, which efficiently assists motorists and ensures the safety of the vehicles and the valuables inside them. For monitoring, Optical Character Recognition was employed to monitor and list all the cars entering the parking area. All parking events in this system are visible via a MATLAB GUI, which contains time-in, time-out and time-consumed information as well as the lot number where each car parks. The system also has a payment method: a coin-slot operation controls the exit gate. The Automated Car Park Management System was successfully built utilizing microcontrollers, specifically one PIC18F4550, two PIC16F84s and one PIC16F628A.

  2. Parallel network simulations with NEURON.

    Science.gov (United States)

    Migliore, M; Cannia, C; Lytton, W W; Markram, Henry; Hines, M L

    2006-10-01

    The NEURON simulation environment has been extended to support parallel network simulations. Each processor integrates the equations for its subnet over an interval equal to the minimum (interprocessor) presynaptic spike generation to postsynaptic spike delivery connection delay. The performance of three published network models with very different spike patterns exhibits superlinear speedup on Beowulf clusters and demonstrates that spike communication overhead is often less than the benefit of an increased fraction of the entire problem fitting into high speed cache. On the EPFL IBM Blue Gene, almost linear speedup was obtained up to 100 processors. Increasing one model from 500 to 40,000 realistic cells exhibited almost linear speedup on 2,000 processors, with an integration time of 9.8 seconds and communication time of 1.3 seconds. The potential for speed-ups of several orders of magnitude makes practical the running of large network simulations that could otherwise not be explored.
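
    The scheduling idea is the crux: because every interprocessor connection imposes a delay of at least min_delay, each process can integrate its subnet for min_delay time units between spike exchanges. A toy Python sketch of that loop (a stand-in, not NEURON's implementation):

    ```python
    # Each process integrates [t, t + min_delay) without communicating, then
    # all processes exchange the spikes generated in that window. Delivery
    # times are >= min_delay after generation, so no spike can be needed
    # before the next exchange; communication is amortized over many steps.

    def integrate_subnet(rank, t0, t1):
        """Stand-in for the per-process ODE integration; returns spikes
        as (source cell, spike time) pairs."""
        return [(rank, t0 + 0.4)]

    def simulate(nprocs=4, t_stop=5.0, min_delay=1.0):
        t, pending = 0.0, []
        while t < t_stop:
            local_spikes = [integrate_subnet(r, t, t + min_delay)
                            for r in range(nprocs)]
            # One collective exchange per interval (the only comm step)
            for spikes in local_spikes:
                pending.extend((src, ts + min_delay) for src, ts in spikes)
            t += min_delay
        return len(pending)

    print("spikes exchanged:", simulate())
    ```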

  3. Parallel computing and quantum chromodynamics

    CERN Document Server

    Bowler, K C

    1999-01-01

    The study of Quantum Chromodynamics (QCD) remains one of the most challenging topics in elementary particle physics. The lattice formulation of QCD, in which space-time is treated as a four-dimensional hypercubic grid of points, provides the means for a numerical solution from first principles but makes extreme demands upon computational performance. High Performance Computing (HPC) offers us the tantalising prospect of a verification of QCD through the precise reproduction of the known masses of the strongly interacting particles. It is also leading to the development of a phenomenological tool capable of disentangling strong interaction effects from weak interaction effects in the decays of one kind of quark into another, crucial for determining parameters of the standard model of particle physics. The 1980s saw the first attempts to apply parallel architecture computers to lattice QCD. The SIMD and MIMD machines used in these pioneering efforts were the ICL DAP and the Cosmic Cube, respectively. These wer...

  4. Parallelization of Kinetic Theory Simulations

    CERN Document Server

    Howell, Jim; Colbry, Dirk; Pickett, Rodney; Staber, Alec; Sagert, Irina; Strother, Terrance

    2013-01-01

    Numerical studies of shock waves in large scale systems via kinetic simulations with millions of particles are too computationally demanding to be processed in serial. In this work we focus on optimizing the parallel performance of a kinetic Monte Carlo code for astrophysical simulations such as core-collapse supernovae. Our goal is to attain a flexible program that scales well with the architecture of modern supercomputers. This approach requires a hybrid model of programming that combines a message passing interface (MPI) with a multithreading model (OpenMP) in C++. We report on our approach to implement the hybrid design into the kinetic code and show first results which demonstrate a significant gain in performance when many processors are applied.
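
    The hybrid structure (message passing across nodes, shared-memory threading within a node) can be mirrored in Python with mpi4py and a thread pool. This is a structural sketch only: the actual code is C++ with OpenMP, and CPython threads would not accelerate pure-Python arithmetic:

    ```python
    # Hybrid-parallel sketch in the spirit of an MPI+OpenMP design.
    # Run with, e.g.: mpiexec -n 4 python kmc_sketch.py
    from concurrent.futures import ThreadPoolExecutor
    import random

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    N = 1_000_000                    # total Monte Carlo particles
    local_n = N // size              # block per MPI rank (distributed level)

    def mc_collisions(n, seed):
        """Stand-in kinetic step: count accepted collision candidates."""
        rng = random.Random(seed)
        return sum(1 for _ in range(n) if rng.random() < 0.1)

    # Shared-memory level: threads split this rank's particles (OpenMP role)
    with ThreadPoolExecutor(max_workers=4) as pool:
        counts = pool.map(mc_collisions, [local_n // 4] * 4,
                          [rank * 4 + i for i in range(4)])
    local_accepted = sum(counts)

    # Distributed-memory level: one reduction across ranks (MPI role)
    total = comm.allreduce(local_accepted, op=MPI.SUM)
    if rank == 0:
        print("accepted collision candidates:", total)
    ```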

  5. Parallel spinors on flat manifolds

    Science.gov (United States)

    Sadowski, Michał

    2006-05-01

    Let p(M) be the dimension of the vector space of parallel spinors on a closed spin manifold M. We prove that every finite group G is the holonomy group of a closed flat spin manifold M(G) such that p(M(G))>0. If the holonomy group Hol(M) of M is cyclic, then we give an explicit formula for p(M) another than that given in [R.J. Miatello, R.A. Podesta, The spectrum of twisted Dirac operators on compact flat manifolds, Trans. Am. Math. Soc., in press]. We answer the question when p(M)>0 if Hol(M) is a cyclic group of prime order or dim⁡M≤4.

  6. Self-testing in parallel

    Science.gov (United States)

    McKague, Matthew

    2016-04-01

    Self-testing allows us to determine, through classical interaction only, whether some players in a non-local game share particular quantum states. Most work on self-testing has concentrated on developing tests for small states like one pair of maximally entangled qubits, or on tests where there is a separate player for each qubit, as in a graph state. Here we consider the case of testing many maximally entangled pairs of qubits shared between two players. Previously such a test was shown where testing is sequential, i.e., one pair is tested at a time. Here we consider the parallel case where all pairs are tested simultaneously, giving considerably more power to dishonest players. We derive sufficient conditions for a self-test for many maximally entangled pairs of qubits shared between two players and also two constructions for self-tests where all pairs are tested simultaneously.

  7. Nanocapillary Adhesion between Parallel Plates.

    Science.gov (United States)

    Cheng, Shengfeng; Robbins, Mark O

    2016-08-01

    Molecular dynamics simulations are used to study capillary adhesion from a nanometer scale liquid bridge between two parallel flat solid surfaces. The capillary force, Fcap, and the meniscus shape of the bridge are computed as the separation between the solid surfaces, h, is varied. Macroscopic theory predicts the meniscus shape and the contribution of liquid/vapor interfacial tension to Fcap quite accurately for separations as small as two or three molecular diameters (1-2 nm). However, the total capillary force differs in sign and magnitude from macroscopic theory for h ≲ 5 nm (8-10 diameters) because of molecular layering that is not included in macroscopic theory. For these small separations, the pressure tensor in the fluid becomes anisotropic. The components in the plane of the surface vary smoothly and are consistent with theory based on the macroscopic surface tension. Capillary adhesion is affected by only the perpendicular component, which has strong oscillations as the molecular layering changes.
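
    The macroscopic theory being tested is the standard one: a Laplace-pressure term set by the meniscus curvature plus a direct surface-tension term acting on the contact line. In generic textbook form (not the paper's exact equations, and with signs depending on convention):

    ```latex
    % Liquid bridge between parallel plates: separation h, wetted area A,
    % contact-line length L, surface tension \gamma, contact angle \theta.
    % The azimuthal radius r_2 is large, so the meniscus curvature is
    % dominated by r_1 \approx h/(2\cos\theta):
    \Delta p = \gamma \left( \frac{1}{r_1} + \frac{1}{r_2} \right)
             \approx \frac{2\gamma\cos\theta}{h},
    \qquad
    F_{\mathrm{cap}} \approx
        \underbrace{\frac{2\gamma\cos\theta}{h}\,A}_{\text{Laplace pressure}}
      + \underbrace{\gamma L \sin\theta}_{\text{surface tension}} .
    ```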

  8. Quantum parallelism may be limited

    CERN Document Server

    Ozhigov, Yu I

    2016-01-01

    We consider the quantum formalism limited by a classical simulating computer with fixed memory. The memory is redistributed in the course of modeling by varying the set of classical states and the accuracy of the representation of amplitudes. This computational description completely preserves the conventional formalism and does not contradict experiments, but it makes fast quantum algorithms impossible. The description involves a slowdown of quantum evolution with the growth of the dimension of the minimal subspace containing the entangled states that arise in the evolution. This slowdown is the single difference between the proposed formalism and the standard one; it is negligible for the systems in usual experiments, including those in which many entangled particles participate, but grows rapidly in the attempt to realize scalable quantum computations, which require unlimited parallelism. The experimental verification of this version of quantum formalism is reduced to the fixati...

  9. Parallel computing in enterprise modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and where they differ from more traditional models, is that the sought result is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  10. INFLUENCE OF IONIC-STRENGTH AND SHEAR RATE ON THE DESORPTION OF POLYSTYRENE PARTICLES FROM A GLASS COLLECTOR AS STUDIED IN A PARALLEL-PLATE FLOW CHAMBER

    NARCIS (Netherlands)

    MEINDERS, JM; BUSSCHER, HJ

    1993-01-01

    The residence-time-dependent desorption during the deposition of polystyrene particles 736 nm in diameter on glass was studied in situ using a parallel-plate flow chamber and automated image analysis. Comparison of successively grabbed images yielded the initial desorption rate coefficient, and fina

  11. Fully Parallel MHD Stability Analysis Tool

    Science.gov (United States)

    Svidzinski, Vladimir; Galkin, Sergei; Kim, Jin-Soo; Liu, Yueqiang

    2015-11-01

    Progress on the full parallelization of the plasma stability code MARS will be reported. MARS calculates eigenmodes in 2D axisymmetric toroidal equilibria in MHD-kinetic plasma models. It is a powerful tool for studying MHD and MHD-kinetic instabilities and is widely used by the fusion community. The parallel version of MARS is intended for simulations on local parallel clusters. It will be an efficient tool for simulating MHD instabilities with low, intermediate and high toroidal mode numbers within both the fluid and kinetic plasma models already implemented in MARS. Parallelization of the code includes parallelization of the construction of the matrix for the eigenvalue problem and parallelization of the inverse-iteration algorithm implemented in MARS for the solution of the formulated eigenvalue problem. Construction of the matrix is parallelized by distributing the load among processors assigned to different magnetic surfaces. Parallelization of the solution of the eigenvalue problem is achieved by repeating the steps of the present MARS algorithm using parallel libraries and procedures. Results of the MARS parallelization and of the development of a new fixed-boundary equilibrium code adapted for MARS input will be reported. Work is supported by the U.S. DOE SBIR program.
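
    The inverse-iteration algorithm mentioned is standard and compact enough to sketch serially; in the parallel version described, the matrix construction and the repeated solves are distributed over processors assigned to different magnetic surfaces. A NumPy illustration (not MARS code):

    ```python
    import numpy as np

    def inverse_iteration(A, sigma, tol=1e-10, max_iter=200):
        """Inverse iteration: converges to the eigenpair of A whose
        eigenvalue lies closest to the shift sigma."""
        n = A.shape[0]
        M = A - sigma * np.eye(n)
        x = np.random.default_rng(1).standard_normal(n)
        lam = sigma
        for _ in range(max_iter):
            y = np.linalg.solve(M, x)    # the repeated solve step
            x = y / np.linalg.norm(y)
            lam_new = x @ A @ x          # Rayleigh-quotient estimate
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam_new, x

    A = np.diag([1.0, 3.0, 7.0]) + 0.01 * np.ones((3, 3))  # toy symmetric matrix
    lam, vec = inverse_iteration(A, sigma=2.9)
    print("eigenvalue nearest 2.9:", lam)
    ```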

  12. Throat Culture

    Science.gov (United States)

    A culture is a test that is often used to ...

  13. Repellent Culture.

    Science.gov (United States)

    Carroll, Jeffrey

    2001-01-01

    Considers defining "culture," noting how it is difficult to define because those individuals defining it cannot separate themselves from it. Relates these issues to student writing and their writing improvement. Addresses violence in relation to culture. (SG)

  14. Culturing Protozoa.

    Science.gov (United States)

    Stevenson, Paul

    1980-01-01

    Compares various nutrient media, growth conditions, and stock solutions used in culturing protozoa. A hay infusion in Chalkey's solution maintained at a stable temperature is recommended for producing the most dense and diverse cultures. (WB)

  15. A self-contained, programmable microfluidic cell culture system with real-time microscopy access

    DEFF Research Database (Denmark)

    Skafte-Pedersen, Peder; Hemmingsen, Mette; Sabourin, David;

    2011-01-01

    Utilizing microfluidics is a promising way for increasing the throughput and automation of cell biology research. We present a complete self-contained system for automated cell culture and experiments with real-time optical read-out. The system offers a high degree of user-friendliness, stability...

  16. Do You Automate? Saving Time and Dollars

    Science.gov (United States)

    Carmichael, Christine H.

    2010-01-01

    An automated workforce management strategy can help schools save jobs, improve the job satisfaction of teachers and staff, and free up precious budget dollars for investments in critical learning resources. Automated workforce management systems can help schools control labor costs, minimize compliance risk, and improve employee satisfaction.…

  17. Validation of Automated Scoring of Science Assessments

    Science.gov (United States)

    Liu, Ou Lydia; Rios, Joseph A.; Heilman, Michael; Gerard, Libby; Linn, Marcia C.

    2016-01-01

    Constructed response items can both measure the coherence of student ideas and serve as reflective experiences to strengthen instruction. We report on new automated scoring technologies that can reduce the cost and complexity of scoring constructed-response items. This study explored the accuracy of c-rater-ML, an automated scoring engine…

  18. How to assess sustainability in automated manufacturing

    DEFF Research Database (Denmark)

    Dijkman, Teunis Johannes; Rödger, Jan-Markus; Bey, Niki

    2015-01-01

    The aim of this paper is to describe how sustainability in automation can be assessed. The assessment method is illustrated using a case study of a robot. Three aspects of sustainability assessment in automation are identified. Firstly, we consider automation as part of a larger system that fulfills ...; ... (sustainability) specifications move top-down, which helps avoid sub-optimization and problem shifting. From these three aspects, sustainable automation is defined as automation that contributes to products that fulfill a market demand in a more sustainable way. The case study presents the carbon footprints ... of a robot, a production cell, a production line and the final product. The case study results illustrate that, depending on the actor and the level at which he/she acts, sustainability and the actions that can be taken to contribute to a more sustainable product are perceived differently: even though the robot...

  19. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high–risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more com

  20. Workflow Automation: A Collective Case Study

    Science.gov (United States)

    Harlan, Jennifer

    2013-01-01

    Knowledge management has proven to be a sustainable competitive advantage for many organizations. Knowledge management systems are abundant, with multiple functionalities. The literature reinforces the use of workflow automation with knowledge management systems to benefit organizations; however, it was not known if process automation yielded…