WorldWideScience

Sample records for vlsi design automation

  1. VLSI design

    CERN Document Server

    Basu, D K

    2014-01-01

    Very Large Scale Integration (VLSI) circuit design has moved from costly curiosity to everyday necessity, especially with the proliferation of embedded computing devices in communications, entertainment and household gadgets. As a result, more and more knowledge of the various aspects of VLSI design technologies is becoming a necessity for engineering/technology students of various disciplines. With this goal in mind, the course material of this book has been designed to cover the fundamental aspects of VLSI design, including: categorization and comparison of the various technologies used for VLSI design; basic fabrication processes involved in VLSI design; design of MOS, CMOS and BiCMOS circuits used in VLSI; structured design of VLSI; introduction to VHDL for VLSI design; automated design for placement and routing of VLSI systems; and VLSI testing and testability. The various topics of the book are discussed lucidly with analysis, where required, examples, figures and adequate analytical and the...

  2. VLSI design

    CERN Document Server

    Einspruch, Norman G

    1986-01-01

    VLSI Electronics Microstructure Science, Volume 14: VLSI Design presents a comprehensive exposition and assessment of the developments and trends in VLSI (Very Large Scale Integration) electronics. This volume covers topics that range from microscopic aspects of materials behavior and device performance to the comprehension of VLSI in systems applications. Each article is prepared by a recognized authority. The subjects discussed in this book include VLSI processor design methodology; the RISC (Reduced Instruction Set Computer); the VLSI testing program; silicon compilers for VLSI; and special

  3. VLSI design

    CERN Document Server

    Chandrasetty, Vikram Arkalgud

    2011-01-01

    This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphic

  4. Handbook of VLSI chip design and expert systems

    CERN Document Server

    Schwarz, A F

    1993-01-01

    Handbook of VLSI Chip Design and Expert Systems provides information pertinent to the fundamental aspects of expert systems, which provide a knowledge-based approach to problem solving. This book discusses the use of expert systems in every possible subtask of VLSI chip design as well as in the interrelations between the subtasks. Organized into nine chapters, this book begins with an overview of design automation, which can be identified as Computer-Aided Design of Circuits and Systems (CADCAS). This text then presents the progress in artificial intelligence, with emphasis on expert systems.

  5. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits with the above components of physical design using a hierarchical approach of evolutionary algorithms. The goals of minimizing the delay in partitioning, the silicon area in floorplanning, the layout area in placement, and the wirelength in routing also influence other criteria such as power, clock, speed, cost, and so forth. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycle, is applied in each phase to minimize area and interconnect length. This approach combines a genetic algorithm and simulated annealing in a hierarchical design flow to attain the objective. The hybrid approach can quickly produce optimal solutions for the popular benchmarks.
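
    As a rough illustration of the hybrid scheme described above, the sketch below couples a genetic algorithm over cell-to-slot placement permutations with a simulated-annealing-style local search applied to each offspring. The toy netlist, grid size and parameters are illustrative assumptions, not the paper's benchmarks or exact algorithm.

```python
# Minimal sketch of a hybrid GA + simulated-annealing loop for toy placement,
# minimizing half-perimeter wirelength (HPWL). All sizes are assumptions.
import random, math

CELLS, GRID = 16, 4                                            # 16 cells on a 4x4 grid
NETS = [random.sample(range(CELLS), 3) for _ in range(20)]     # random 3-pin nets

def hpwl(perm):
    """Half-perimeter wirelength for a cell->slot permutation."""
    cost = 0
    for net in NETS:
        xs = [perm[c] % GRID for c in net]
        ys = [perm[c] // GRID for c in net]
        cost += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return cost

def crossover(p1, p2):
    """Order crossover keeping the child a valid permutation."""
    a, b = sorted(random.sample(range(CELLS), 2))
    child = [None] * CELLS
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child[a:b]]
    for i in list(range(0, a)) + list(range(b, CELLS)):
        child[i] = rest.pop(0)
    return child

def local_search(perm, temp=2.0, steps=50):
    """Simulated-annealing-style pairwise swaps: the hybrid local step."""
    best = perm[:]
    for _ in range(steps):
        i, j = random.sample(range(CELLS), 2)
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]
        d = hpwl(cand) - hpwl(best)
        if d < 0 or random.random() < math.exp(-d / temp):
            best = cand
        temp *= 0.95
    return best

pop = [random.sample(range(CELLS), CELLS) for _ in range(20)]
for gen in range(30):
    pop.sort(key=hpwl)
    parents = pop[:10]
    children = [local_search(crossover(*random.sample(parents, 2))) for _ in range(10)]
    pop = parents + children
print("best HPWL:", hpwl(min(pop, key=hpwl)))
```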

  6. Computer-aided design of microfluidic very large scale integration (mVLSI) biochips design automation, testing, and design-for-testability

    CERN Document Server

    Hu, Kai; Ho, Tsung-Yi

    2017-01-01

    This book provides a comprehensive overview of flow-based, microfluidic VLSI. The authors describe and solve in a comprehensive and holistic manner practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions, based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips, by facilitating the full exploitation of design complexities that are possible with current fabrication techniques. Offers the first practical problem formulation for automated control-layer design in flow-based microfluidic biochips and provides a systematic approach for solving this problem; Introduces a wash-optimization method for cross-contamination removal; Presents a design-for-testability (DfT) technique that can achieve 100...

  7. Design automation, languages, and simulations

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    As the complexity of electronic systems continues to increase, the micro-electronic industry depends upon automation and simulations to adapt quickly to market changes and new technologies. Compiled from chapters contributed to CRC's best-selling VLSI Handbook, this volume covers a broad range of topics relevant to design automation, languages, and simulations. These include a collaborative framework that coordinates distributed design activities through the Internet, an overview of the Verilog hardware description language and its use in a design environment, hardware/software co-design, syst

  8. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  9. Compact MOSFET models for VLSI design

    CERN Document Server

    Bhattacharyya, A B

    2009-01-01

    Practicing designers, students, and educators in the semiconductor field face an ever expanding portfolio of MOSFET models. In Compact MOSFET Models for VLSI Design , A.B. Bhattacharyya presents a unified perspective on the topic, allowing the practitioner to view and interpret device phenomena concurrently using different modeling strategies. Readers will learn to link device physics with model parameters, helping to close the gap between device understanding and its use for optimal circuit performance. Bhattacharyya also lays bare the core physical concepts that will drive the future of VLSI.

  10. Multi-valued LSI/VLSI logic design

    Science.gov (United States)

    Santrakul, K.

    A procedure for synthesizing any large complex logic system, such as LSI and VLSI integrated circuits, is described. This scheme uses multi-valued multiplexers (MVMUX) as the basic building blocks and the tree as the structure of the circuit realization. Simple built-in test circuits included in the network provide thorough functional checking of the network at any time. In brief, four major contributions are made: (1) a multi-valued Algorithmic State Machine (ASM) chart for describing LSI/VLSI behavior; (2) a tree-structured multi-valued multiplexer network which can be obtained directly from an ASM chart; (3) a heuristic tree-structured synthesis method for realizing any combinational logic with minimal or nearly minimal MVMUX; and (4) a hierarchical design of LSI/VLSI with built-in parallel testing capability.

  11. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

    VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  12. Harnessing VLSI System Design with EDA Tools

    CERN Document Server

    Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj

    2012-01-01

    This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and System C synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

  13. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  14. Multi-net optimization of VLSI interconnect

    CERN Document Server

    Moiseev, Konstantin; Wimer, Shmuel

    2015-01-01

    This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference or a guide for design methodologies and layout automation techniques, this book provides a foundation for physical design challenges of interconnect in advanced integrated circuits.  • Describes the evolution of interconnect scaling and provides new techniques for layout migration and optimization, focusing on multi-net optimization; • Presents research results that provide a level of design optimization which does not exist in commercially-available design automation software tools; • Includes mathematical properties and conditions for optimal...

  15. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  16. Design of two easily-testable VLSI array multipliers

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J.; Shen, J.P.

    1983-01-01

    Array multipliers are well-suited to VLSI implementation because of the regularity in their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown not to be c-testable. However, a modified design, using a modified adder cell, is generated and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier. A modified design of the Baugh-Wooley array multiplier is shown to be c-testable and requires 55 test patterns. The implementation of a practical c-testable 16×16 array multiplier is also presented. 10 references.
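
    The idea behind c-testability can be illustrated in software: model one array cell (here assumed to be an AND gate feeding a full adder, so it has four binary inputs) and verify it exhaustively with all 2^4 = 16 patterns; a c-testable array is one whose structure lets the same constant-size pattern set reach every cell. This is a toy sketch, not the paper's modified cell design.

```python
# Toy illustration of exhaustive cell testing with 16 patterns (an assumption-laden
# sketch of the c-testability idea, not the paper's modified carry-save cell).
from itertools import product

def cell(a, b, sum_in, carry_in):
    """Behavioural model of one cell: p = a AND b, then full-add p + sum_in + carry_in."""
    p = a & b
    s = p ^ sum_in ^ carry_in
    cout = (p & sum_in) | (p & carry_in) | (sum_in & carry_in)
    return s, cout

def exhaustive_cell_test(cell_under_test):
    """Apply all 16 input patterns and compare against the reference model."""
    for a, b, si, ci in product((0, 1), repeat=4):
        if cell_under_test(a, b, si, ci) != cell(a, b, si, ci):
            return False, (a, b, si, ci)
    return True, None

# Example: a cell whose carry output is stuck at 0 is caught by the 16 patterns.
faulty = lambda a, b, si, ci: (cell(a, b, si, ci)[0], 0)
print(exhaustive_cell_test(cell))    # (True, None)
print(exhaustive_cell_test(faulty))  # (False, first failing pattern)
```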

  17. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts of fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book  • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  18. vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM

    Science.gov (United States)

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2014-01-01

    This article provides a proposal for a personal e-learning system (vPELS, where "v" stands for VLSI: very large scale integrated circuit) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…

  19. VLSI Design with Alliance Free CAD Tools: an Implementation Example

    Directory of Open Access Journals (Sweden)

    Chávez-Bracamontes Ramón

    2015-07-01

    This paper presents the methodology used for a digital integrated circuit design that implements the communication protocol known as Serial Peripheral Interface, using the Alliance CAD System. The aim of this paper is to show how the work of VLSI design can be done by graduate and undergraduate students with minimal resources and experience. The physical design was sent to be fabricated using the AMI C5 CMOS process, which features a 0.5 micrometer transistor size, sponsored by the MOSIS Educational Program. Tests were made on a platform that transfers data from inertial sensor measurements to the designed SPI chip, which in turn sends the data back on a parallel bus to a common microcontroller. The results show the efficiency of the employed methodology in VLSI design, as well as the feasibility of manufacturing ICs from school projects that have insufficient or no source of funding.

  20. VLSI Design of Trusted Virtual Sensors

    Directory of Open Access Journals (Sweden)

    Macarena C. Martínez-Rodríguez

    2018-01-01

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  1. VLSI Design of Trusted Virtual Sensors.

    Science.gov (United States)

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
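
    To make the PWAR idea concrete, the following sketch evaluates a piecewise-affine model over a uniform grid of hyper-rectangles, each storing its own affine coefficients. The domain, grid size and coefficients are invented for illustration and do not reproduce the prototype's configuration.

```python
# Minimal software sketch of evaluating a PieceWise-Affine hyper-Rectangular model:
# locate the hyper-rectangle containing the input, then apply that cell's affine map.
import numpy as np

class PWARModel:
    def __init__(self, lo, hi, divisions, coeffs):
        # lo, hi: domain bounds per input dimension; divisions: cells per dimension
        # coeffs[cell_index] = (w, b) so that y = w . x + b inside that cell
        self.lo, self.hi = np.asarray(lo, float), np.asarray(hi, float)
        self.div = np.asarray(divisions)
        self.coeffs = coeffs

    def _cell_index(self, x):
        # locate the hyper-rectangle containing x (clipped to the domain)
        frac = (np.asarray(x, float) - self.lo) / (self.hi - self.lo)
        idx = np.clip((frac * self.div).astype(int), 0, self.div - 1)
        return tuple(idx)

    def __call__(self, x):
        w, b = self.coeffs[self._cell_index(x)]
        return float(np.dot(w, x) + b)

# Example with a 2-input model on [0,1]^2, 2x2 cells, arbitrary affine pieces:
coeffs = {(i, j): (np.array([1.0 + i, -0.5 * j]), 0.1 * (i + j))
          for i in range(2) for j in range(2)}
model = PWARModel(lo=[0, 0], hi=[1, 1], divisions=[2, 2], coeffs=coeffs)
print(model([0.3, 0.8]))   # evaluates the affine piece of cell (0, 1)
```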

  2. VLSI top-down design based on the separation of hierarchies

    NARCIS (Netherlands)

    Spaanenburg, L.; Broekema, A.; Leenstra, J.; Huys, C.

    1986-01-01

    Despite the presence of structure, interactions between the three views on VLSI design still lead to lengthy iterations. By separating the hierarchies for the respective views, the interactions are reduced. This separated hierarchy allows top-down design with functional abstractions as exemplified

  3. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  4. Digital VLSI design with Verilog a textbook from Silicon Valley Technical Institute

    CERN Document Server

    Williams, John

    2008-01-01

    This unique textbook is structured as a step-by-step course of study along the lines of a VLSI IC design project. In a nominal schedule of 12 weeks, two days and about 10 hours per week, the entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs. Digital VLSI Design With Verilog is all an engineer needs for in-depth understanding of the Verilog language: syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided on the

  5. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods in decoding is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, simply by changing the trace-back parameter value. Its computational process needs only N + 2 clock cycles of latency, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
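
    For reference, a plain software version of hard-decision Viterbi decoding is sketched below for the classic rate-1/2, constraint-length-3 code with generators (7, 5) octal; it illustrates the branch-metric/path-metric recursion the architecture implements, but none of the paper's PE-level optimizations, conditional adapter or trace-back parameterization.

```python
# Minimal hard-decision Viterbi decoder sketch (assumed example code, not the paper's design).
G = (0b111, 0b101)           # generator polynomials of the (7, 5) code
N_STATES = 4                 # 2**(K-1) with constraint length K = 3

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & G[0]), parity(reg & G[1])]
        state = reg >> 1
    return out

def viterbi_decode(received):
    INF = 10**9
    metric = [0] + [INF] * (N_STATES - 1)          # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for k in range(0, len(received), 2):
        r = received[k:k + 2]
        new_metric = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            for bit in (0, 1):
                reg = (bit << 2) | state
                expected = [parity(reg & G[0]), parity(reg & G[1])]
                branch = (expected[0] != r[0]) + (expected[1] != r[1])  # Hamming metric
                nxt = reg >> 1
                cand = metric[state] + branch
                if cand < new_metric[nxt]:
                    new_metric[nxt] = cand
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0]                    # last two zeros flush the encoder
coded = encode(msg)
coded[3] ^= 1                               # inject one channel bit error
print(viterbi_decode(coded) == msg)         # True: the error is corrected
```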

  6. DPL/Daedalus design environment (for VLSI)

    Energy Technology Data Exchange (ETDEWEB)

    Batali, J; Mayle, N; Shrobe, H; Sussman, G; Weise, D

    1981-01-01

    The DPL/Daedalus design environment is an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory. The system consists of several components: a layout language called DPL (for design procedure language); an interactive graphics facility (Daedalus); and several special purpose design procedures for constructing complex artifacts such as PLAs and microprocessor data paths. Coordinating all of these is a generalized property list data base which contains both the data representing circuits and the procedures for constructing them. The authors first review the nature of the data base and then turn to DPL and Daedalus, the two most common ways of entering information into the data base. The next two sections review the specialized procedures for constructing PLAs and data paths; the final section describes a tool for hierarchical node extraction. 5 references.

  7. Formal verification an essential toolkit for modern VLSI design

    CERN Document Server

    Seligman, Erik; Kumar, M V Achutha Kiran

    2015-01-01

    Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches for design and validation, with hands-on advice for working engineers integrating these techniques into their work. Building on a basic knowledge of SystemVerilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity. Presents formal verific

  8. VLSI architecture and design for the Fermat Number Transform implementation

    Energy Technology Data Exchange (ETDEWEB)

    Pajayakrit, A.

    1987-01-01

    A new technique of sectioning a pipelined transformer, using the Fermat Number Transform (FNT), is introduced. Also described is a novel VLSI design which overcomes the problems of implementing FNTs for use in fast convolution/correlation. The design comprises one complete section of a pipelined transformer and may be programmed to function at any point in a forward or inverse pipeline, allowing the construction of a pipelined convolver or correlator using identical chips so that the favorable properties of the transform can be exploited. This overcomes the difficulty of fitting a complete pipeline onto one chip without resorting to the use of several different designs. The implementation of a high-speed convolver/correlator using the VLSI chips has been successfully developed and tested. For impulse response lengths of up to 16 points, sampling rates of 0.5 MHz can be achieved. Finally, the filter speed performance using the FNT chips is compared to other designs and conclusions are drawn on the merits of the FNT for this application. The advantages and limitations of the FNT are also analyzed with respect to the more conventional FFT, and the results are provided.

  9. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  10. VLSI in medicine

    CERN Document Server

    Einspruch, Norman G

    1989-01-01

    VLSI Electronics Microstructure Science, Volume 17: VLSI in Medicine deals with the more important applications of VLSI in medical devices and instruments.This volume is comprised of 11 chapters. It begins with an article about medical electronics. The following three chapters cover diagnostic imaging, focusing on such medical devices as magnetic resonance imaging, neurometric analyzer, and ultrasound. Chapters 5, 6, and 7 present the impact of VLSI in cardiology. The electrocardiograph, implantable cardiac pacemaker, and the use of VLSI in Holter monitoring are detailed in these chapters. The

  11. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    Science.gov (United States)

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

    A portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs a three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with a table-driven Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using two publicly available EEG datasets. Experiment results show that the designed VLSI system improves detection accuracy and training efficiency.
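
    A minimal software analogue of the described pipeline is sketched below: a three-level Daubechies wavelet decomposition supplies sub-band energy features to a Gaussian-kernel SVM. Synthetic signals, PyWavelets and scikit-learn stand in for the EEG datasets and the on-chip FE/SMO modules, so this is only an illustration of the data flow, not of the VLSI design.

```python
# Sketch of DWT feature extraction + Gaussian-kernel SVM on synthetic "EEG" windows.
import numpy as np
import pywt
from sklearn.svm import SVC

FS, WIN = 256, 512                     # assumed sampling rate and window length

def features(window):
    """Three-level db4 wavelet decomposition; use sub-band energies as features."""
    coeffs = pywt.wavedec(window, "db4", level=3)      # [cA3, cD3, cD2, cD1]
    return np.array([np.sum(c ** 2) / len(c) for c in coeffs])

def synth(seizure, n=200):
    """Toy EEG-like windows: 'seizure' windows get an extra rhythmic component."""
    t = np.arange(WIN) / FS
    X = []
    for _ in range(n):
        x = np.random.randn(WIN)
        if seizure:
            x += 3.0 * np.sin(2 * np.pi * np.random.uniform(3, 8) * t)
        X.append(features(x))
    return np.array(X)

X = np.vstack([synth(False), synth(True)])
y = np.array([0] * 200 + [1] * 200)
idx = np.random.permutation(len(y))
train, test = idx[:300], idx[300:]

clf = SVC(kernel="rbf", C=1.0, gamma="scale")          # Gaussian-kernel SVM
clf.fit(X[train], y[train])
print("test accuracy:", clf.score(X[test], y[test]))
```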

  12. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    The design of automated video surveillance systems is one of the demanding missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real time in live PAL (720 × 576) resolution video streams coming directly from the camera.
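
    As a purely software illustration of what a clustering-based motion detector does, the sketch below maintains a small set of per-pixel intensity clusters and flags pixels that are not explained by a well-established cluster. The scheme, thresholds and frame sizes are simplifying assumptions and do not reproduce the architecture or word widths of the paper.

```python
# Simplified per-pixel clustering background model for motion detection (a sketch,
# not the paper's exact scheme). Each pixel keeps K intensity clusters with weights.
import numpy as np

K, MATCH_T, LEARN = 3, 20.0, 0.05      # clusters/pixel, match threshold, learning rate

class ClusterBackground:
    def __init__(self, shape):
        self.centroids = np.zeros(shape + (K,), np.float32)   # per-pixel cluster centroids
        self.weights = np.zeros(shape + (K,), np.float32)

    def update(self, gray):
        """Return a boolean foreground mask and update the model in place."""
        diff = np.abs(self.centroids - gray[..., None])        # distance to each cluster
        best = diff.argmin(axis=-1)                            # index of closest cluster
        bi = np.indices(gray.shape)
        matched = diff[bi[0], bi[1], best] < MATCH_T
        # matched pixels: pull the winning centroid toward the pixel, grow its weight
        c = self.centroids[bi[0], bi[1], best]
        self.centroids[bi[0], bi[1], best] = np.where(matched, (1 - LEARN) * c + LEARN * gray, c)
        w = self.weights[bi[0], bi[1], best]
        self.weights[bi[0], bi[1], best] = np.where(matched, w + 1, w)
        # unmatched pixels: replace the weakest cluster with the new intensity
        weakest = self.weights.argmin(axis=-1)
        repl = ~matched
        self.centroids[bi[0], bi[1], weakest] = np.where(repl, gray, self.centroids[bi[0], bi[1], weakest])
        self.weights[bi[0], bi[1], weakest] = np.where(repl, 1, self.weights[bi[0], bi[1], weakest])
        # foreground = pixel not explained by a well-established cluster
        return repl | (self.weights[bi[0], bi[1], best] < 5)

# Example on synthetic PAL-sized frames: static background plus a moving bright block.
bg = ClusterBackground((576, 720))
for t in range(30):
    frame = np.full((576, 720), 100, np.float32)
    frame[200:240, 10 * t:10 * t + 40] = 200.0          # "moving object"
    mask = bg.update(frame)
print("foreground pixels in last frame:", int(mask.sum()))
```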

  13. VLSI electronics microstructure science

    CERN Document Server

    1982-01-01

    VLSI Electronics: Microstructure Science, Volume 4 reviews trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the silicon-on-insulator for VLSI and VHSIC, X-ray lithography, and transient response of electron transport in GaAs using the Monte Carlo method. The technology and manufacturing of high-density magnetic-bubble memories, metallic superlattices, challenge of education for VLSI, and impact of VLSI on medical signal processing are also elaborated. This text likewise covers the impact of VLSI t

  14. Using Software Technology to Specify Abstract Interfaces in VLSI Design.

    Science.gov (United States)

    1985-01-01

    with the complexity levels inherent in VLSI design, in that they can capitalize on their foundations in discrete mathematics and the theory of...basis, rather than globally. Such a partitioning of module semantics makes the specification easier to construct and verify intellectually; it also...access function definitions. A standard language improves executability characteristics by capitalizing on portable, optimized system software developed

  15. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from use of residue Fermat number systems. System of finite arithmetic over residue Fermat number systems enables calculation of discrete Fourier transform (DFT) of series of complex numbers with reduced number of multiplications. Computer architectures based on approach suitable for design of very-large-scale integrated (VLSI) circuits for computing DFT's. General approach not limited to DFT's; Applicable to decoding of error-correcting codes and other transform calculations. System readily implemented in VLSI.

  16. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture, and a sequencing graph to model the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and includes the source code of several of the techniques p...

  17. VLSI implementations for image communications

    CERN Document Server

    Pirsch, P

    1993-01-01

    The past few years have seen a rapid growth in image processing and image communication technologies. New video services and multimedia applications are continuously being designed. Essential for all these applications are image and video compression techniques. The purpose of this book is to report on recent advances in VLSI architectures and their implementation for video signal processing applications with emphasis on video coding for bit rate reduction. Efficient VLSI implementation for video signal processing spans a broad range of disciplines involving algorithms, architectures, circuits

  18. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory-efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform are presented in this paper. This is accomplished by proposing a new memory-efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory-efficient architecture, along with proper input/output interfaces, is implemented on a Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering-based motion detection architecture. The new memory-efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real time for standard PAL (720 × 576) color video.

  19. Space station automation study: Automation requirements derived from space manufacturing concepts, volume 2

    Science.gov (United States)

    1984-01-01

    Automation requirements were developed for two manufacturing concepts: (1) Gallium Arsenide Electroepitaxial Crystal Production and Wafer Manufacturing Facility, and (2) Gallium Arsenide VLSI Microelectronics Chip Processing Facility. A functional overview of the ultimate design concept incorporating the two manufacturing facilities on the space station is provided. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, sensors, and artificial intelligence. While the cost-effectiveness of these facilities was not analyzed, both appear entirely feasible for the year 2000 timeframe.

  20. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  1. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    National Research Council Canada - National Science Library

    Horiuchi, Timothy K; Krishnaprasad, P. S

    2007-01-01

    ... This includes multiple efforts related to a VLSI-based echolocation system being developed in one of our laboratories, from algorithm development and bat flight data analysis to VLSI circuit design...

  2. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

    A research and development proposal is presented for VLSI CAD systems and for a hardware environment called System 5G on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic VLSI architectures to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built on a functionally distributed architecture connecting inference machines and relational database machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to System 5G will use the 5G network architecture protocol. Users will access System 5G using standardized 5G personal computers and 5G personal logic programming stations: highly intelligent terminals providing an instruction set that supports predicate logic and input/output facilities for audio and graphical information.

  3. Space station automation study: Automation requirements derived from space manufacturing concepts. Volume 1: Executive summary

    Science.gov (United States)

    1984-01-01

    The electroepitaxial process and the Very Large Scale Integration (VLSI) circuits (chips) facilities were chosen because each requires a very high degree of automation, and therefore involves extensive use of teleoperators, robotics, process mechanization, and artificial intelligence. Both cover a raw materials process and a sophisticated multi-step process and are therefore highly representative of the kinds of difficult operation, maintenance, and repair challenges which can be expected for any type of space manufacturing facility. Generic areas were identified which will require significant further study. The initial design will be based on terrestrial state-of-the-art hard automation. One hundred candidate missions were evaluated on the basis of automation potential and availability of meaningful knowledge. The design requirements and unconstrained design concepts developed for the two missions are presented.

  4. Digital VLSI systems design a design manual for implementation of projects on FPGAs and ASICs using Verilog

    CERN Document Server

    Ramachandran, S

    2007-01-01

    Digital VLSI Systems Design is written for an advanced level course using Verilog and is meant for undergraduates, graduates and research scholars of Electrical, Electronics, Embedded Systems, Computer Engineering and interdisciplinary departments such as Bio Medical, Mechanical, Information Technology, Physics, etc. It serves as a reference design manual for practicing engineers and researchers as well. Diligent freelance readers and consultants may also start using this book with ease. The book presents new material and theory as well as synthesis of recent work with complete Project Designs

  5. VLSI 'smart' I/O module development

    Science.gov (United States)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  6. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

    This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  7. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855), Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  8. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

    This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...

  9. Plasma processing for VLSI

    CERN Document Server

    Einspruch, Norman G

    1984-01-01

    VLSI Electronics: Microstructure Science, Volume 8: Plasma Processing for VLSI (Very Large Scale Integration) discusses the utilization of plasmas for general semiconductor processing. It also includes expositions on advanced deposition of materials for metallization, lithographic methods that use plasmas as exposure sources and for multiple resist patterning, and device structures made possible by anisotropic etching.This volume is divided into four sections. It begins with the history of plasma processing, a discussion of some of the early developments and trends for VLSI. The second section

  10. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

    This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship φ = tan⁻¹(Q/I)...
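
    A one-line software reference for this relationship is shown below; using atan2 instead of a plain arctangent of the ratio Q/I resolves the quadrant ambiguity that any hardware phase extractor must also handle. The sample values are arbitrary.

```python
# Tiny software reference for phase extraction from I/Q samples via the arctangent.
import math

def phase(i_sample, q_sample):
    """Phase angle of a complex sample I + jQ, in radians, in (-pi, pi]."""
    return math.atan2(q_sample, i_sample)

print(phase(1.0, 1.0))    #  0.785... ( 45 degrees)
print(phase(-1.0, -1.0))  # -2.356... (-135 degrees), a quadrant atan(Q/I) alone would miss
```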

  11. The VLSI handbook

    CERN Document Server

    Chen, Wai-Kai

    2007-01-01

    Written by a stellar international panel of expert contributors, this handbook remains the most up-to-date, reliable, and comprehensive source for real answers to practical problems. In addition to updated information in most chapters, this edition features several heavily revised and completely rewritten chapters, new chapters on such topics as CMOS fabrication and high-speed circuit design, heavily revised sections on testing of digital systems and design languages, and two entirely new sections on low-power electronics and VLSI signal processing. An updated compendium of references and othe

  12. Synthesis of on-chip control circuits for mVLSI biochips

    DEFF Research Database (Denmark)

    Potluri, Seetal; Schneider, Alexander Rüdiger; Hørslev-Petersen, Martin

    2017-01-01

    them to laboratory environments. To address this issue, researchers have proposed methods to reduce the number of off-chip pressure sources, through integration of on-chip pneumatic control logic circuits fabricated using three-layer monolithic membrane valve technology. Traditionally, mVLSI biochip......-chip control circuit design and (iii) the integration of on-chip control in the placement and routing design tasks. In this paper we present a design methodology for logic synthesis and physical synthesis of mVLSI biochips that use on-chip control. We show how the proposed methodology can be successfully...... applied to generate biochip layouts with integrated on-chip pneumatic control.

  13. Pursuit, Avoidance, and Cohesion in Flight: Multi-Purpose Control Laws and Neuromorphic VLSI

    Science.gov (United States)

    2010-10-01

    spatial navigation in mammals. We have designed, fabricated, and are now testing a neuromorphic VLSI chip that implements a spike-based, attractor...implementations (custom neuromorphic VLSI and robotics) we will apply important practical constraints that can lead to deeper insight into how and why efficient

  14. Artificial immune system algorithm in VLSI circuit configuration

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in VLSI circuit configuration. In this paper, we compare the artificial immune system algorithm (HNN-3SATAIS) with a brute-force algorithm incorporated with a Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as the platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect early errors in VLSI circuit design.
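
    The clonal selection principle mentioned above can be sketched in a few lines: keep a population of candidate truth assignments, clone the fittest ones in proportion to the number of satisfied clauses, and hypermutate the clones. The random 3-SAT instance and parameters below are assumptions for illustration; the sketch does not include the Hopfield network component of HNN-3SATAIS.

```python
# Minimal clonal-selection sketch searching for a satisfying assignment of random 3-SAT.
import random

N_VARS, N_CLAUSES = 20, 60
random.seed(1)
CLAUSES = [[random.choice([1, -1]) * v for v in random.sample(range(1, N_VARS + 1), 3)]
           for _ in range(N_CLAUSES)]

def affinity(assign):
    """Number of satisfied clauses; assign[i] is the truth value of variable i+1."""
    sat = 0
    for clause in CLAUSES:
        if any((lit > 0) == assign[abs(lit) - 1] for lit in clause):
            sat += 1
    return sat

def mutate(assign, rate):
    return [not b if random.random() < rate else b for b in assign]

pop = [[random.random() < 0.5 for _ in range(N_VARS)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=affinity, reverse=True)
    if affinity(pop[0]) == N_CLAUSES:
        break
    clones = []
    for rank, antibody in enumerate(pop[:10]):          # clonal selection + expansion
        n_clones = 10 - rank                            # more clones for fitter antibodies
        rate = 0.02 + 0.2 * rank / 10                   # hypermutation grows with rank
        clones += [mutate(antibody, rate) for _ in range(n_clones)]
    pop = sorted(pop[:10] + clones, key=affinity, reverse=True)[:30]
print("satisfied clauses:", affinity(pop[0]), "of", N_CLAUSES)
```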

  15. Design of a Low-Power VLSI Macrocell for Nonlinear Adaptive Video Noise Reduction

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2004-09-01

    A VLSI macrocell for edge-preserving video noise reduction is proposed in this paper. It is based on a nonlinear rational filter enhanced by a noise estimator for blind and dynamic adaptation of the filtering parameters to the input signal statistics. The VLSI filter features a modular architecture allowing the extension of both mask size and filtering directions. Both spatial and spatiotemporal algorithms are supported. Simulation results with monochrome test videos prove its efficiency for many noise distributions, with PSNR improvements up to 3.8 dB with respect to a nonadaptive solution. The VLSI macrocell has been realized in a 0.18 μm CMOS technology using a standard-cell library; it allows for real-time processing of main video formats, up to 30 fps (frames per second) 4CIF, with a power consumption on the order of a few mW.

  16. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

    The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special-purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5 × 5 image windows, are described

  17. Digital VLSI design with Verilog a textbook from Silicon Valley Polytechnic Institute

    CERN Document Server

    Williams, John Michael

    2014-01-01

    This book is structured as a step-by-step course of study along the lines of a VLSI integrated circuit design project.  The entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs.  The author includes everything an engineer needs for in-depth understanding of the Verilog language:  Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided in the downloadable files that accompany the book.  For readers with access to appropriate electronic design tools, all solutions can be developed, simulated, and synthesized as described in the book.   A partial list of design topics includes design partitioning, hierarchy decomposition, safe coding styles, back annotation, wrapper modules, concurrency, race conditions, assertion-based verification, clock synchronization, and design for test.   A concluding presentation of special topics inclu...

  18. Lithography for VLSI

    CERN Document Server

    Einspruch, Norman G

    1987-01-01

    VLSI Electronics Microstructure Science, Volume 16: Lithography for VLSI treats special topics from each branch of lithography, and also contains general discussion of some lithographic methods.This volume contains 8 chapters that discuss the various aspects of lithography. Chapters 1 and 2 are devoted to optical lithography. Chapter 3 covers electron lithography in general, and Chapter 4 discusses electron resist exposure modeling. Chapter 5 presents the fundamentals of ion-beam lithography. Mask/wafer alignment for x-ray proximity printing and for optical lithography is tackled in Chapter 6.

  19. Surface and interface effects in VLSI

    CERN Document Server

    Einspruch, Norman G

    1985-01-01

    VLSI Electronics Microstructure Science, Volume 10: Surface and Interface Effects in VLSI provides the advances made in the science of semiconductor surface and interface as they relate to electronics. This volume aims to provide a better understanding and control of surface and interface related properties. The book begins with an introductory chapter on the intimate link between interfaces and devices. The book is then divided into two parts. The first part covers the chemical and geometric structures of prototypical VLSI interfaces. Subjects detailed include the technologically most import

  20. VLSI Architectures for the Multiplication of Integers Modulo a Fermat Number

    Science.gov (United States)

    Chang, J. J.; Truong, T. K.; Reed, I. S.; Hsu, I. S.

    1984-01-01

    Multiplication is central in the implementation of Fermat number transforms and other residue number algorithms. There is need for a good multiplication algorithm that can be realized easily on a very large scale integration (VLSI) chip. The Leibowitz multiplier is modified to realize multiplication in the ring of integers modulo a Fermat number. This new algorithm requires only a sequence of cyclic shifts and additions. The designs developed for this new multiplier are regular, simple, expandable, and, therefore, suitable for VLSI implementation.
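
    The record's key arithmetic fact is that, modulo a Fermat number F = 2^b + 1, the term 2^b is congruent to -1, so multiplying by a power of two reduces to a shift followed by a subtraction. The Python sketch below multiplies modulo F_4 = 65537 using only shifts, additions, and subtractions; it illustrates the identity rather than the Leibowitz multiplier hardware itself, and the operand values are arbitrary.

      def mod_fermat(x, b):
          """Reduce x modulo F = 2**b + 1 using only shifts and adds,
          based on the identity 2**b ≡ -1 (mod F)."""
          F = (1 << b) + 1
          while x >= F:
              low, high = x & ((1 << b) - 1), x >> b
              x = low - high                      # 2**b * high ≡ -high (mod F)
          return x % F                            # fold a possible negative result back

      def mul_mod_fermat(a, c, b):
          """Multiply a*c mod (2**b + 1) as a sequence of shifts and modular adds."""
          acc = 0
          for k in range(c.bit_length()):
              if (c >> k) & 1:
                  acc = mod_fermat(acc + mod_fermat(a << k, b), b)
          return acc

      b = 16                                       # F_4 = 2**16 + 1 = 65537
      a, c = 12345, 54321
      assert mul_mod_fermat(a, c, b) == (a * c) % ((1 << b) + 1)
      print(mul_mod_fermat(a, c, b))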

  1. UW VLSI chip tester

    Science.gov (United States)

    McKenzie, Neil

    1989-12-01

    We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  2. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    Science.gov (United States)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

    Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. microprocessors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the aVLSI neuromorphic chip, producing differing principal neuron firing patterns in response to real sensor response data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  3. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
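
    As a software illustration of cellular-automaton-based pseudorandom generation, the sketch below steps a one-dimensional elementary CA (rule 30, with cyclic boundaries) and taps one cell per clock; every cell updates, and could be read, in parallel on each step. The specific rule, width, and boundary condition are assumptions for illustration and are not claimed to match the generators studied in the thesis.

      def ca_prng_bits(width=31, steps=64, seed=1, rule=30):
          """One-dimensional elementary CA used as a parallel pseudorandom source:
          every cell updates simultaneously each clock; here we tap the centre cell
          for one output bit per step."""
          cells = [(seed >> i) & 1 for i in range(width)]
          table = [(rule >> i) & 1 for i in range(8)]       # rule number as a 3-bit lookup table
          out = []
          for _ in range(steps):
              nxt = []
              for i in range(width):
                  left = cells[(i - 1) % width]             # cyclic boundary (an assumption)
                  right = cells[(i + 1) % width]
                  nxt.append(table[(left << 2) | (cells[i] << 1) | right])
              cells = nxt
              out.append(cells[width // 2])                  # tap one cell as the output bit
          return out

      bits = ca_prng_bits()
      print("".join(map(str, bits)))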

  4. Electro-optic techniques for VLSI interconnect

    Science.gov (United States)

    Neff, J. A.

    1985-03-01

    A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth generation supercomputer, is investigating alternatives to the VLSI metallic interconnects, especially the use of optical techniques to transport the information either inter- or intrachip. As the on-chip performance of VLSI continues to improve via the scaling down of the logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer the information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing cross talk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer increased programming flexibility for restructuring the interconnect network.

  5. Development of Radhard VLSI electronics for SSC calorimeters

    International Nuclear Information System (INIS)

    Dawson, J.W.; Nodulman, L.J.

    1989-01-01

    A new program of development of integrated electronics for liquid argon calorimeters in the SSC detector environment is being started at Argonne National Laboratory. Scientists from Brookhaven National Laboratory and Vanderbilt University, together with an industrial participant, are expected to collaborate in this work. Interaction rates, segmentation, and the radiation environment dictate that front-end electronics of SSC calorimeters must be implemented in the form of highly integrated, radhard, analog, low noise, VLSI custom monolithic devices. Important considerations are power dissipation, choice of functions integrated on the front-end chips, and cabling requirements. An extensive level of expertise in radhard electronics exists within the industrial community, and a primary objective of this work is to bring that expertise to bear on the problems of SSC detector design. Radiation hardness measurements and requirements as well as calorimeter design will be primarily the responsibility of Argonne scientists and our Brookhaven and Vanderbilt colleagues. Radhard VLSI design and fabrication will be primarily the industrial participant's responsibility. The rapid-cycling synchrotron at Argonne will be used for radiation damage studies involving response to neutrons and charged particles, while damage from gammas will be investigated at Brookhaven. 10 refs., 6 figs., 2 tabs

  6. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam

    2011-12-01

    Power dissipation poses a great challenge for VLSI designers. With the intense down-scaling of technology, the total power consumption of the chip is made up primarily of leakage power dissipation. This paper proposes combining a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result of implementing this novel power gating technique, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved. Finally, the possible effects of surge currents and ground bounce noise are studied. These findings allow longer operation times for battery-operated systems characterized by long standby periods. © 2011 IEEE.

  7. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design and has driven the success of the VLSI industry, resulting in denser and faster integration of devices. As the technology node moves towards the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices scale. In this paper, different scaling methods are studied first. These methods are then used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. To mitigate the power dissipation in scaled devices, we have proposed a reliable leakage-reduction low power transmission gate (LPTG) approach and tested it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which yields greater circuit reliability. (semiconductor integrated circuits)

  8. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed to develop a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for design of biomicrofluidic devices and integrated systems. The developed tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  9. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam; Ghoneim, Mohamed T.; El Boghdady, Nawal; Halawa, Sarah; Iskander, Sophinese M.; Anis, Mohab H.

    2011-01-01

    -designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result

  10. VLSI structures for track finding

    International Nuclear Information System (INIS)

    Dell'Orso, M.

    1989-01-01

    We discuss the architecture of a device based on the concept of associative memory, designed to solve the track finding problem, typical of high energy physics experiments, in a time span of a few microseconds even for very high multiplicity events. This "machine" is implemented as a large array of custom VLSI chips. All the chips are identical and each of them stores a number of "patterns". All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)

  11. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbits/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.
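
    The auto- and cross-correlation figures of merit mentioned in the record can be illustrated in a few lines: for binary spectral codewords, the autocorrelation should peak at zero shift while the cross-correlation stays low at every shift. The codewords below are hypothetical and are not taken from the paper.

      def correlation(code_a, code_b, shift):
          """Cyclic correlation of two binary (0/1) spectral codewords at a given shift."""
          n = len(code_a)
          return sum(code_a[i] & code_b[(i + shift) % n] for i in range(n))

      # Two hypothetical 8-chip wavelength codewords (1 = wavelength present).
      c1 = [1, 0, 1, 1, 0, 0, 1, 0]
      c2 = [0, 1, 1, 0, 1, 0, 0, 1]

      auto = [correlation(c1, c1, s) for s in range(len(c1))]
      cross = [correlation(c1, c2, s) for s in range(len(c1))]
      print("autocorrelation  :", auto)    # high peak at zero shift
      print("cross-correlation:", cross)   # ideally low for all shifts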

  13. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10 Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and a 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplification. The decoder is constructed using a single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.

  14. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    Science.gov (United States)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public key cryptosystem supporting 512-bit, 1024-bit and 2048-bit operation based on the Montgomery algorithm, achieving clock-cycle counts comparable to current related works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for electric circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely input/output modules, a registers module, an arithmetic module and a control module. We applied the concept of the systolic array to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm2 (4.58 × 4.58 mm2 with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
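
    As a sketch of the two algorithmic ingredients the record names, Montgomery modular multiplication (REDC) and binary modular exponentiation, the Python below performs an RSA-style modular exponentiation without any division by the modulus. The 128-bit modulus is an arbitrary odd number chosen for illustration; the systolic mapping and the chip's 512/1024/2048-bit word sizes are not reproduced here.

      def montgomery_setup(n, width):
          R = 1 << width
          # n' such that n*n' ≡ -1 (mod R); pow() with exponent -1 and a modulus needs Python 3.8+.
          n_prime = (-pow(n, -1, R)) % R
          return R, n_prime

      def mont_mul(a, b, n, R, n_prime, width):
          """REDC: computes a*b*R^-1 mod n without a division by n."""
          t = a * b
          m = ((t & (R - 1)) * n_prime) & (R - 1)
          u = (t + m * n) >> width
          return u - n if u >= n else u

      def mont_modexp(base, exp, n, width):
          """Left-to-right binary exponentiation carried out entirely in the Montgomery domain."""
          R, n_prime = montgomery_setup(n, width)
          base_m = (base * R) % n                                     # map into Montgomery form
          acc = R % n                                                 # Montgomery form of 1
          for bit in bin(exp)[2:]:
              acc = mont_mul(acc, acc, n, R, n_prime, width)          # square
              if bit == "1":
                  acc = mont_mul(acc, base_m, n, R, n_prime, width)   # multiply
          return mont_mul(acc, 1, n, R, n_prime, width)               # convert back

      n = 0xD5BBB96D30086EC484EBA3D7F9CAEB07   # arbitrary 128-bit odd modulus, for illustration only
      width = n.bit_length()
      assert mont_modexp(7, 65537, n, width) == pow(7, 65537, n)
      print(hex(mont_modexp(7, 65537, n, width)))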

  15. Parallel VLSI Architecture

    Science.gov (United States)

    Truong, T. K.; Reed, I.; Yeh, C.; Shao, H.

    1985-01-01

    The Fermat number transform (FNT) convolves two digital data sequences. In very-large-scale integration (VLSI) applications, such as image and radar signal processing, X-ray reconstruction, and spectrum shaping, linear convolution of two digital data sequences of arbitrary lengths can be accomplished using the FNT.
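
    A minimal software sketch of the idea: linear convolution can be carried out exactly as a cyclic convolution over the integers modulo a Fermat number, here F_3 = 257 with 2 as the root of unity, provided every true output stays below the modulus. The naive O(N^2) transform below is only to show the arithmetic; a VLSI implementation would use an FFT-like decomposition in which multiplication by powers of 2 becomes a shift.

      F = 257                     # Fermat number F_3 = 2**8 + 1
      ALPHA = 2                   # 2 has multiplicative order 16 modulo 257
      N = 16                      # transform length

      def fnt(x, root):
          """Naive O(N^2) Fermat number transform over Z_F."""
          return [sum(v * pow(root, n * k, F) for n, v in enumerate(x)) % F
                  for k in range(N)]

      def fnt_convolve(a, b):
          """Linear convolution of short integer sequences via the FNT.
          Results are exact only while every output sample stays below F."""
          pa = a + [0] * (N - len(a))
          pb = b + [0] * (N - len(b))
          A, B = fnt(pa, ALPHA), fnt(pb, ALPHA)
          C = [(x * y) % F for x, y in zip(A, B)]
          inv_alpha = pow(ALPHA, F - 2, F)          # modular inverses via Fermat's little theorem
          inv_n = pow(N, F - 2, F)
          return [(v * inv_n) % F for v in fnt(C, inv_alpha)]

      a, b = [1, 2, 3, 4], [5, 6, 7]
      print(fnt_convolve(a, b)[:len(a) + len(b) - 1])   # [5, 16, 34, 52, 45, 28]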

  16. The GLUEchip: A custom VLSI chip for detectors readout and associative memories circuits

    International Nuclear Information System (INIS)

    Amendolia, S.R.; Galeotti, S.; Morsani, F.; Passuello, D.; Ristori, L.; Turini, N.

    1993-01-01

    An associative memory full-custom VLSI chip for pattern recognition, the AMchip, has been designed and tested in past years; it contains 128 patterns of 60 bits each. To expand the pattern capacity of an Associative Memory bank, the custom VLSI GLUEchip has been developed. The GLUEchip allows the interconnection of up to 16 AMchips or up to 16 GLUEchips: the resulting tree-like structure works like a single AMchip with a pipelined output structure and a pattern capacity increased by a factor of 16 for each GLUEchip used

  17. Opto-VLSI-based reconfigurable free-space optical interconnects architecture

    DEFF Research Database (Denmark)

    Aljada, Muhsen; Alameh, Kamal; Chung, Il-Sug

    2007-01-01

    is the Opto-VLSI processor, which can be driven by digital phase steering and multicasting holograms that reconfigure the optical interconnects between the input and output ports. The optical interconnects architecture is experimentally demonstrated at 2.5 Gbps using a high-speed 1×3 VCSEL array and a 1×3 photoreceiver array in conjunction with two 1×4096-pixel Opto-VLSI processors. The minimisation of the crosstalk between the output ports is achieved by appropriately aligning the VCSEL and PD elements with respect to the Opto-VLSI processors and driving the latter with optimal steering phase holograms.

  18. Carbon nanotube based VLSI interconnects analysis and design

    CERN Document Server

    Kaushik, Brajesh Kumar

    2015-01-01

    The brief primarily focuses on the performance analysis of CNT-based interconnects in the current research scenario. Different CNT structures are modeled on the basis of transmission line theory. Performance comparison for different CNT structures illustrates that CNTs are more promising than Cu or other materials used in global VLSI interconnects. The brief is organized into five chapters which mainly discuss: (1) an overview of the current research scenario and basics of interconnects; (2) unique crystal structures and the basics of physical properties of CNTs, and the production, purification and applications of CNTs; (3) a brief technical review, the geometry and equivalent RLC parameters for different single and bundled CNT structures; (4) a comparative analysis of crosstalk and delay for different single and bundled CNT structures; and (5) various unique mixed CNT bundle structures and their equivalent electrical models.

  19. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  20. The test of VLSI circuits

    Science.gov (United States)

    Baviere, Ph.

    Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during the operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the defects identified. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.

  1. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  2. Generic Automated Multi-function Finger Design

    Science.gov (United States)

    Honarpardaz, M.; Tarkian, M.; Sirkett, D.; Ölvander, J.; Feng, X.; Elf, J.; Sjögren, R.

    2016-11-01

    Multi-function fingers that are able to handle multiple workpieces are crucial in improvement of a robot workcell. Design automation of multi-function fingers is highly demanded by robot industries to overcome the current iterative, time consuming and complex manual design process. However, the existing approaches for the multi-function finger design automation are unable to entirely meet the robot industries’ need. This paper proposes a generic approach for design automation of multi-function fingers. The proposed approach completely automates the design process and requires no expert skill. In addition, this approach executes the design process much faster than the current manual process. To validate the approach, multi-function fingers are successfully designed for two case studies. Further, the results are discussed and benchmarked with existing approaches.

  3. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  4. Mechatronic Design Automation

    DEFF Research Database (Denmark)

    Fan, Zhun

    successfully design analogue filters, vibration absorbers, micro-electro-mechanical systems, and vehicle suspension systems, all in an automatic or semi-automatic way. It also investigates the very important issue of co-designing plant-structures and dynamic controllers in automated design of Mechatronic...

  5. VLSI System Implementation of 200 MHz, 8-bit, 90nm CMOS Arithmetic and Logic Unit (ALU) Processor Controller

    Directory of Open Access Journals (Sweden)

    Fazal NOORBASHA

    2012-08-01

    Full Text Available This study presents the Very Large Scale Integration (VLSI) system implementation of a 200 MHz, 8-bit, 90 nm Complementary Metal Oxide Semiconductor (CMOS) Arithmetic and Logic Unit (ALU) processor controller with a logic-gate design style and 0.12 µm six-metal 90 nm CMOS fabrication technology. The system blocks and the behaviour are defined and the logical design is implemented at gate level in the design phase. Then, the logic circuits are simulated and the subunits are converted into 90 nm CMOS layout. Finally, in order to construct the VLSI system, these units are placed in the floor plan and simulated with analog and digital, logic and switch level simulators. The results of the simulations indicate that the VLSI system can control different instructions, which can be divided into sub groups: transfer instructions, arithmetic and logic instructions, rotate and shift instructions, branch instructions, input/output instructions, and control instructions. The data bus of the system is 16-bit. It runs at 200 MHz, and the operating voltage is 1.2 V. In this paper, the parametric analysis of the system, the design steps and the obtained results are explained.

  6. Macrocell Builder: IP-Block-Based Design Environment for High-Throughput VLSI Dedicated Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Urard Pascal

    2006-01-01

    Full Text Available We propose an efficient IP-block-based design environment for high-throughput VLSI systems. The flow generates a SystemC register-transfer-level (RTL) architecture, starting from a Matlab functional model described as a netlist of functional IP. The refinement model automatically inserts control structures to manage delays induced by the use of RTL IPs. It also inserts a control structure to coordinate the execution of parallel clocked IP. The delays may be managed by registers or by counters included in the control structure. The flow has been used successfully in three real-world DSP systems. The experiments show that the approach can produce efficient RTL architectures and saves a considerable amount of design time.

  7. Nano lasers in photonic VLSI

    NARCIS (Netherlands)

    Hill, M.T.; Oei, Y.S.; Smit, M.K.

    2007-01-01

    We examine the use of micro and nano lasers to form digital photonic VLSI building blocks. Problems such as isolation and cascading of building blocks are addressed, and the potential of future nano lasers explored.

  8. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, the Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  9. The AMchip: A VLSI associative memory for track finding

    International Nuclear Information System (INIS)

    Morsani, F.; Galeotti, S.; Passuello, D.; Amendolia, S.R.; Ristori, L.; Turini, N.

    1992-01-01

    An associative memory to be used for super-fast track finding in future high energy physics experiments has been implemented on silicon as a full-custom CMOS VLSI chip (the AMchip). The first prototype has been designed and successfully tested at INFN in Pisa. It is implemented in 1.6 μm, double metal, silicon gate CMOS technology and contains about 140 000 MOS transistors on a 1×1 cm2 silicon chip. (orig.)

  10. Hardware/software co-design and optimization for cyberphysical integration in digital microfluidic biochips

    CERN Document Server

    Luo, Yan; Ho, Tsung-Yi

    2015-01-01

    This book describes a comprehensive framework for hardware/software co-design, optimization, and use of robust, low-cost, and cyberphysical digital microfluidic systems. Readers with a background in electronic design automation will find this book to be a valuable reference for leveraging conventional VLSI CAD techniques for emerging technologies, e.g., biochips or bioMEMS. Readers from the circuit/system design community will benefit from methods presented to extend design and testing techniques from microelectronics to mixed-technology microsystems. For readers from the microfluidics domain,

  11. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full custom circuits. Of course, silicon compilers play a key role and the introduction of artificial intelligence techniques sheds a new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique and the third chapter updates the state-of-the art in this area. The fourth chapter concentrates specifically on the e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem is addressed. The design for testability and the interface to the test equipment are mandatory to fulfill the test requirement for fast-prototyping. Finally, the last chapter deals with the subject of education when many people complain about the lack of use of fast-prototyping in higher education for VLSI

  12. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI-processors having a 32 bit wide data path. The result of this first year's work is the concept of a general purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  13. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac

  14. N-Point DCT VLSI Architecture for Emerging HEVC Standard

    OpenAIRE

    Ahmed, Ashfaq; Shahid, Muhammad Usman; Rehman, Ata ur

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4 × 4 up to 32 × 32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to gain speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into ...
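
    The even/odd ("partial butterfly") decomposition that such DCT hardware typically exploits can be sketched in software: even-indexed outputs of an N-point DCT-II are an N/2-point DCT-II of folded sums, while odd-indexed outputs use folded differences. The floating-point reference below only demonstrates the decomposition for power-of-two N; HEVC's integer transform matrices and the paper's folded datapath are not reproduced.

      import math

      def dct2(x):
          """Reference O(N^2) type-II DCT (unnormalised)."""
          N = len(x)
          return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
                  for k in range(N)]

      def dct2_even_odd(x):
          """Even/odd ("partial butterfly") decomposition, assuming len(x) is a power of two:
          even outputs reuse an N/2-point DCT of the folded sums, odd outputs use the
          folded differences. This is the kind of matrix decomposition a folded datapath exploits."""
          N = len(x)
          if N == 1:
              return x[:]
          half = N // 2
          sums = [x[n] + x[N - 1 - n] for n in range(half)]
          diffs = [x[n] - x[N - 1 - n] for n in range(half)]
          even = dct2_even_odd(sums)                       # recurse on the even half
          out = [0.0] * N
          for r in range(half):
              out[2 * r] = even[r]
          for k in range(1, N, 2):                         # odd outputs, computed directly
              out[k] = sum(diffs[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(half))
          return out

      x = [31.0, -7.0, 12.0, 4.0, 0.0, 5.0, -3.0, 8.0]
      ref, fast = dct2(x), dct2_even_odd(x)
      assert all(abs(a - b) < 1e-9 for a, b in zip(ref, fast))
      print([round(v, 3) for v in fast])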

  15. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

    Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography to linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be applied to judiciously satisfy the complex VLSI circuit fabrication requirements is discussed and those areas that are in need of major further advances are addressed. Emphasis will be placed on advanced electron beam and storage ring x-ray lithography

  16. VLSI micro- and nanophotonics science, technology, and applications

    CERN Document Server

    Lee, El-Hang; Razeghi, Manijeh; Jagadish, Chennupati

    2011-01-01

    Addressing the growing demand for larger capacity in information technology, VLSI Micro- and Nanophotonics: Science, Technology, and Applications explores issues of science and technology of micro/nano-scale photonics and integration for broad-scale and chip-scale Very Large Scale Integration photonics. This book is a game-changer in the sense that it is quite possibly the first to focus on "VLSI Photonics". Very little effort has been made to develop integration technologies for micro/nanoscale photonic devices and applications, so this reference is an important and necessary early-stage pe

  17. A novel configurable VLSI architecture design of window-based image processing method

    Science.gov (United States)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can only realize a specific kind of algorithm, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume more logic resources. To address these problems, this paper proposes a new VLSI architecture for window-based image processing operations, which is configurable and takes the image boundary into account. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces the overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce the hardware complexity and the external bandwidth requirements. By performing different scalar-function and reduction-function operations in a pipeline, the architecture can support a variety of window-based image processing applications. Compared with other reported structures, the performance of the new structure is comparable to some and superior to others; in particular, compared with the systolic array processor CWP, this structure achieves a speed increase of approximately 12.9% at the same frequency. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively. Furthermore, the processing time is independent of the different window-based algorithms mapped to the structure.
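
    The scalar-function/reduction-function view of window operators and the handling of image borders can be illustrated with a small software model: every output pixel is a reduction over a k×k neighbourhood, with out-of-range coordinates clamped (border replication). This is only a behavioural sketch under assumed border handling; the paper's overlap-and-flush scheme and memory organisation are hardware specifics not modelled here.

      def window_process(image, k, reduce_fn, border="replicate"):
          """Apply a k×k window operator with explicit border handling, so the output
          has the same size as the input and edge pixels are not silently dropped."""
          h, w = len(image), len(image[0])
          r = k // 2

          def pix(y, x):
              if border == "replicate":                    # clamp coordinates at the edges
                  y = min(max(y, 0), h - 1)
                  x = min(max(x, 0), w - 1)
              return image[y][x]

          out = [[0] * w for _ in range(h)]
          for y in range(h):
              for x in range(w):
                  window = [pix(y + dy, x + dx) for dy in range(-r, r + 1)
                                                for dx in range(-r, r + 1)]
                  out[y][x] = reduce_fn(window)            # e.g. min, max, median, weighted sum
          return out

      img = [[10, 20, 30, 40],
             [15, 25, 35, 45],
             [12, 22, 32, 42]]
      blur = window_process(img, 3, lambda w: sum(w) // len(w))   # 3×3 mean filter
      print(blur)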

  18. Design Automation in Synthetic Biology.

    Science.gov (United States)

    Appleton, Evan; Madsen, Curtis; Roehner, Nicholas; Densmore, Douglas

    2017-04-03

    Design automation refers to a category of software tools for designing systems that work together in a workflow for designing, building, testing, and analyzing systems with a target behavior. In synthetic biology, these tools are called bio-design automation (BDA) tools. In this review, we discuss the BDA tool areas (specify, design, build, test, and learn) and introduce the existing software tools designed to solve problems in these areas. We then detail the functionality of some of these tools and show how they can be used together to create the desired behavior of two types of modern synthetic genetic regulatory networks. Copyright © 2017 Cold Spring Harbor Laboratory Press; all rights reserved.

  19. ORGANIZATION OF GRAPHIC INFORMATION FOR VIEWING THE MULTILAYER VLSI TOPOLOGY

    Directory of Open Access Journals (Sweden)

    V. I. Romanov

    2016-01-01

    Full Text Available One of the possible ways to reorganize the graphical information describing the set of topology layers of a modern VLSI is considered. The method is aimed at use under the constraint of a bounded video card memory size. An additional effect, providing high performance in forming the multi-image layout of the multi-layer topology of a modern VLSI, is achieved by preloading the required textures by means of an auxiliary background process.

  20. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. Aiming to support 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of the processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which saves about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
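
    A behavioural sketch of fractional-sample interpolation helps make the datapath concrete: each half-pel sample is an 8-tap FIR over neighbouring integer samples, normalised and clipped. The tap set below is the commonly cited HEVC 8-tap half-pel luma filter and is treated here as an assumption; the paper's 8-pixel interpolation unit and reconfigurable pipeline are not modelled.

      # Commonly cited 8-tap half-pel luma filter of HEVC (treated here as an assumption).
      HALF_PEL_TAPS = [-1, 4, -11, 40, 40, -11, 4, -1]

      def interp_half_pel_row(samples, x):
          """Horizontal half-pel value between samples[x] and samples[x+1].
          Edge samples are replicated, and the result is normalised (taps sum to 64)
          and clipped to an 8-bit range."""
          acc = 0
          for i, tap in enumerate(HALF_PEL_TAPS):
              idx = min(max(x + i - 3, 0), len(samples) - 1)   # taps span x-3 .. x+4
              acc += tap * samples[idx]
          return min(max((acc + 32) >> 6, 0), 255)             # round, divide by 64, clip

      row = [10, 12, 20, 60, 120, 180, 200, 210, 212, 215]
      print([interp_half_pel_row(row, x) for x in range(len(row) - 1)])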

  1. ASTROS: A multidisciplinary automated structural design tool

    Science.gov (United States)

    Neill, D. J.

    1989-01-01

    ASTROS (Automated Structural Optimization System) is a finite-element-based multidisciplinary structural optimization procedure developed under Air Force sponsorship to perform automated preliminary structural design. The design task is the determination of the structural sizes that provide an optimal structure while satisfying numerous constraints from many disciplines. In addition to its automated design features, ASTROS provides a general transient and frequency response capability, as well as a special feature to perform a transient analysis of a vehicle subjected to a nuclear blast. The motivation for the development of a single multidisciplinary design tool is that such a tool can provide improved structural designs in less time than is currently needed. The role of such a tool is even more apparent as modern materials come into widespread use. Balancing conflicting requirements for the structure's strength and stiffness while exploiting the benefits of material anisotropy is perhaps an impossible task without assistance from an automated design tool. Finally, the use of a single tool can bring the design task into better focus among design team members, thereby improving their insight into the overall task.

  2. An Intelligent Automation Platform for Rapid Bioprocess Design.

    Science.gov (United States)

    Wu, Tianyi; Zhou, Yuhong

    2014-08-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.

  3. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  4. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed, using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.
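
    For readers unfamiliar with the underlying model, the sketch below integrates the classical Hodgkin-Huxley equations with textbook squid-axon parameters and counts spikes under a constant current; NeuroDyn's generalised, spline-regressed kinetics and the data-assimilation procedure itself are considerably richer and are not reproduced here.

      import math

      def hh_step(v, m, h, n, i_ext, dt=0.01):
          """One Euler step of the classical Hodgkin-Huxley equations (textbook parameters)."""
          g_na, g_k, g_l = 120.0, 36.0, 0.3          # mS/cm^2
          e_na, e_k, e_l = 50.0, -77.0, -54.387      # mV
          c_m = 1.0                                   # uF/cm^2

          a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
          b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
          a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
          b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
          a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
          b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)

          i_na = g_na * m ** 3 * h * (v - e_na)      # sodium, potassium and leak currents
          i_k = g_k * n ** 4 * (v - e_k)
          i_l = g_l * (v - e_l)

          v += dt * (i_ext - i_na - i_k - i_l) / c_m
          m += dt * (a_m * (1.0 - m) - b_m * m)
          h += dt * (a_h * (1.0 - h) - b_h * h)
          n += dt * (a_n * (1.0 - n) - b_n * n)
          return v, m, h, n

      v, m, h, n = -65.0, 0.05, 0.6, 0.32
      spikes = 0
      for step in range(100000):                      # 1 second at dt = 0.01 ms
          v_prev = v
          v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
          if v_prev < 0.0 <= v:                       # count upward zero crossings as spikes
              spikes += 1
      print("spikes:", spikes)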

  5. An Intelligent Automation Platform for Rapid Bioprocess Design

    Science.gov (United States)

    Wu, Tianyi

    2014-01-01

    Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user’s inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. PMID:24088579

  6. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    Science.gov (United States)

    2007-03-31

    Final report, 03/01/04 - 02/28/07: Neuromorphic VLSI-based Bat Echolocation for Micro-aerial Vehicle guidance. Only fragments of the abstract survive: the work uncovered interesting new issues in the choice for representing the intensity of signals, and testing of the first chip version of an echo ... timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles has just finished. Subject terms: neuromorphic VLSI, bat echolocation.

  7. International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking

    CERN Document Server

    Shirur, Yasha; Prasad, Rekha

    2013-01-01

    This book is a collection of papers presented by renowned researchers, keynote speakers and academicians in the International Conference on VLSI, Communication, Analog Designs, Signals and Systems, and Networking (VCASAN-2013), organized by B.N.M. Institute of Technology, Bangalore, India during July 17-19, 2013. The book provides global trends in cutting-edge technologies in electronics and communication engineering. The content of the book is useful to engineers, researchers and academicians as well as industry professionals.

  8. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Full Text Available Convolutional codes are widely used as Forward Error Correction (FEC) codes in digital communication systems. For decoding convolutional codes at the receiver end, the Viterbi decoder is often given high priority, as it meets the demand for high speed and low power. At present, the design of a competent system in Very Large Scale Integration (VLSI) technology requires these VLSI parameters to be finely defined. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner Spice (TSPICE) in 0.25 µm, 65 nm, and 180 nm technologies of the Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results illustrate that the asynchronous design techniques achieve a 25.21% power reduction compared to the synchronous design and operate at a speed of 475 MHz.
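
    A compact software model of the decoding algorithm itself (independent of the asynchronous circuit techniques in the paper): hard-decision Viterbi decoding of the classic rate-1/2, constraint-length-3 convolutional code with Hamming branch metrics, add-compare-select recursion, and traceback. The generator polynomials and test message are assumed for illustration.

      G0, G1 = 0b111, 0b101        # generators of the classic rate-1/2, K=3 code (assumed example)

      def conv_encode(bits):
          state, out = 0, []
          for b in bits:
              reg = (b << 2) | state                      # [current bit, two previous bits]
              out += [bin(reg & G0).count("1") & 1,
                      bin(reg & G1).count("1") & 1]
              state = reg >> 1
          return out

      def viterbi_decode(received, n_bits):
          """Hard-decision Viterbi: Hamming branch metrics, add-compare-select, full traceback."""
          INF = float("inf")
          metrics = [0, INF, INF, INF]                     # encoder starts in state 0
          history = []
          for t in range(n_bits):
              r = received[2 * t: 2 * t + 2]
              new_metrics, choices = [INF] * 4, [None] * 4
              for state in range(4):
                  if metrics[state] == INF:
                      continue
                  for b in (0, 1):
                      reg = (b << 2) | state
                      expect = [bin(reg & G0).count("1") & 1,
                                bin(reg & G1).count("1") & 1]
                      nxt = reg >> 1
                      cost = metrics[state] + sum(x != y for x, y in zip(expect, r))
                      if cost < new_metrics[nxt]:
                          new_metrics[nxt], choices[nxt] = cost, (state, b)
              metrics = new_metrics
              history.append(choices)
          state = metrics.index(min(metrics))              # best surviving path
          decoded = []
          for choices in reversed(history):
              prev, b = choices[state]
              decoded.append(b)
              state = prev
          return decoded[::-1]

      msg = [1, 0, 1, 1, 0, 0, 1, 0]
      noisy = conv_encode(msg)
      noisy[3] ^= 1                                        # inject one channel bit error
      assert viterbi_decode(noisy, len(msg)) == msg
      print(viterbi_decode(noisy, len(msg)))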

  9. BioCMOS Interfaces and Co-Design

    CERN Document Server

    Carrara, Sandro

    2013-01-01

    The application of CMOS circuits and ASIC VLSI systems to problems in medicine and system biology has led to the emergence of Bio/CMOS Interfaces and Co-Design as an exciting and rapidly growing area of research. The mutual inter-relationships between VLSI-CMOS design and the biophysics of molecules interfacing with silicon and/or onto metals has led to the emergence of the interdisciplinary engineering approach to Bio/CMOS interfaces. This new approach, facilitated by 3D circuit design and nanotechnology, has resulted in new concepts and applications for VLSI systems in the bio-world. This book offers an invaluable reference to the state-of-the-art in Bio/CMOS interfaces. It describes leading-edge research in the field of CMOS design and VLSI development for applications requiring integration of biological molecules onto the chip. It provides multidisciplinary content ranging from biochemistry to CMOS design in order to address Bio/CMOS interface co-design in bio-sensing applications.

  10. Physico-topological methods of increasing stability of the VLSI circuit components to irradiation. Fiziko-topologhicheskie sposoby uluchsheniya radiatsionnoj stojkosti komponentov BIS

    Energy Technology Data Exchange (ETDEWEB)

    Pereshenkov, V S [MIFI, Moscow, (Russian Federation); Shishianu, F S; Rusanovskij, V I [S. Lazo KPI, Chisinau, (Moldova, Republic of)

    1992-01-01

    The paper presents the method used and the experimental results obtained for an 8-bit microprocessor irradiated with gamma rays and neutrons. The correlation of the electrical and technological parameters with the irradiation parameters is revealed. The influence of leakage current between devices incorporated in VLSI circuits was studied. The obtained results make it possible to determine the technological parameters necessary for designing circuits able to work at predetermined doses. The substrate doping concentration necessary for isolation, which eliminates the leakage current between devices and prevents VLSI circuit breakdown, was determined. (Author).

  11. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-specific Integrated Circuits (ASIC) and a Mass Storage System (MSS) based on the High-performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.

  12. A multi coding technique to reduce transition activity in VLSI circuits

    International Nuclear Information System (INIS)

    Vithyalakshmi, N.; Rajaram, M.

    2014-01-01

    Advances in VLSI technology have enabled the implementation of complex digital circuits in a single chip, reducing system size and power consumption. In deep submicron low power CMOS VLSI design, the main cause of energy dissipation is the charging and discharging of internal node capacitances due to transition activity. Transition activity is one of the major factors that affect dynamic power dissipation. This paper proposes power reduction analyzed at the algorithm and logic circuit levels. At the algorithm level, the key to reducing power dissipation is minimizing transition activity, which is achieved by introducing a data coding technique. A novel multi-coding technique is therefore introduced that improves the transition activity by up to 52.3% on the bus lines, which in turn reduces the dynamic power dissipation. In addition, 1-bit full adders are introduced in the Hamming distance estimator block, which reduces the device count. This coding method is implemented using Verilog HDL. The overall performance is analyzed using Modelsim and Xilinx tools. In total, a 38.2% power saving capability is achieved compared to other existing methods. (semiconductor technology)
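
    The Hamming-distance idea behind such bus coding can be illustrated with the classic bus-invert scheme: if the next word would toggle more than half the bus lines, transmit its complement and assert an extra invert line. This is the textbook technique, shown here only to make the transition-activity accounting concrete; it is not the paper's multi-coding scheme, and a full accounting would also count toggles of the invert line itself.

      def bus_invert_encode(words, width=8):
          """Classic bus-invert coding: invert the word whenever its Hamming distance
          from the previous bus value exceeds width/2, signalling inversion on an
          extra line. Returns the encoded (word, invert) pairs and the data-line
          transition count."""
          prev, encoded, transitions = 0, [], 0
          mask = (1 << width) - 1
          for w in words:
              dist = bin(prev ^ w).count("1")              # Hamming distance estimate
              if dist > width // 2:
                  w_enc, inv = (~w) & mask, 1
              else:
                  w_enc, inv = w, 0
              transitions += bin(prev ^ w_enc).count("1")
              encoded.append((w_enc, inv))
              prev = w_enc
          return encoded, transitions

      words = [0x00, 0xFF, 0x0F, 0xF1, 0x55, 0xAA]
      plain = sum(bin(a ^ b).count("1") for a, b in zip([0] + words, words))
      enc, coded_transitions = bus_invert_encode(words)
      print("uncoded transitions:", plain, "  bus-invert transitions:", coded_transitions)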

  13. Emerging Applications for High K Materials in VLSI Technology

    Science.gov (United States)

    Clark, Robert D.

    2014-01-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599

  14. Emerging Applications for High K Materials in VLSI Technology

    Directory of Open Access Journals (Sweden)

    Robert D. Clark

    2014-04-01

    Full Text Available The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing.

  15. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary across different OFDM applications. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for the prototype FFT/IFFT processor, chosen for its low power consumption. We also implement the twiddle-factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids a conventional multiply-and-accumulate unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor in TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform (64-2048)-point FFT/IFFT operations at up to 80 MHz operating frequency, which meets the speed requirements of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
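
    A minimal sketch of the rotation-mode CORDIC the twiddle-factor PE relies on: the sample is rotated by the twiddle angle through a fixed number of shift-and-add micro-rotations, with a constant gain correction at the end. Doubles and library atan calls stand in for the fixed-point datapath and arctangent ROM of the actual chip, and the iteration count is an assumption.

        #include <math.h>
        #include <stdio.h>

        #define ITER 16
        #define PI   3.14159265358979323846

        /* Rotation-mode CORDIC: rotates (x, y) by theta using one shift-and-add
           style micro-rotation per iteration; the arctan(2^-i) values would sit
           in a small ROM and the datapath would be fixed point in a real PE. */
        static void cordic_rotate(double theta, double *x, double *y) {
            double xi = *x, yi = *y, acc = 0.0, k = 1.0;
            for (int i = 0; i < ITER; ++i) {
                double step = atan(ldexp(1.0, -i));           /* arctan(2^-i) */
                double d = (theta - acc) >= 0.0 ? 1.0 : -1.0;
                double xn = xi - d * ldexp(yi, -i);           /* yi >> i */
                double yn = yi + d * ldexp(xi, -i);           /* xi >> i */
                acc += d * step;
                xi = xn;
                yi = yn;
                k *= 1.0 / sqrt(1.0 + ldexp(1.0, -2 * i));    /* gain compensation */
            }
            *x = xi * k;
            *y = yi * k;
        }

        int main(void) {
            /* Multiply the sample (1 + 0j) by the twiddle factor e^{-j*pi/4}. */
            double re = 1.0, im = 0.0;
            cordic_rotate(-PI / 4.0, &re, &im);
            printf("%.4f %+.4fj (expected 0.7071 -0.7071j)\n", re, im);
            return 0;
        }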

  16. Designing and implementing test automation frameworks with QTP

    CERN Document Server

    Bhargava, Ashish

    2013-01-01

    A tutorial-based approach, showing basic coding and design techniques to build test automation frameworks. If you are a beginner, an automation engineer, an aspiring test automation engineer, a manual tester, a test lead or a test architect who wants to learn, create, and maintain test automation frameworks, this book will accelerate your ability to develop and adapt the framework.

  17. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    Full Text Available This paper presents embedded processor based automatic temperature control of VLSI chips, using the LM35 temperature sensor and the ARM processor LPC2378. Due to their very high packing density, VLSI chips heat up quickly, and if they are not cooled properly their performance is severely affected. In the present work, the sensor, which is kept in close proximity to the IC, senses the temperature, and the speed of a fan placed near the IC is controlled based on the PWM signal generated by the ARM processor. A buzzer is also provided in the hardware to indicate either the failure of the fan or overheating of the IC. The entire process is achieved by developing a suitable embedded C program.
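
    The control loop described above is simple enough to sketch in C. The version below is hardware-agnostic: read_adc, set_fan_duty and set_buzzer are hypothetical platform hooks (stubbed so the sketch runs on a PC), not LPC2378 register code, and the 10-bit/3.3 V ADC assumption and temperature breakpoints are illustrative.

        #include <stdio.h>
        #include <stdint.h>

        /* Hypothetical platform hooks, stubbed for a desktop build. */
        static uint16_t fake_adc_code;                      /* pretend ADC reading */
        static uint16_t read_adc(void) { return fake_adc_code; }
        static void set_fan_duty(uint8_t pct) { printf("fan duty %u%%\n", pct); }
        static void set_buzzer(int on) { if (on) printf("BUZZER ON\n"); }

        static float lm35_celsius(uint16_t raw) {
            float volts = (raw / 1023.0f) * 3.3f;           /* 10-bit ADC, 3.3 V reference (assumed) */
            return volts * 100.0f;                          /* LM35 outputs 10 mV per degree C */
        }

        static void control_step(void) {
            float t = lm35_celsius(read_adc());
            uint8_t duty;
            if (t < 35.0f)      duty = 20;                  /* idle cooling */
            else if (t < 60.0f) duty = (uint8_t)(20 + (t - 35.0f) * (80.0f / 25.0f));
            else                duty = 100;                 /* full speed */
            printf("T = %.1f C -> ", t);
            set_fan_duty(duty);
            set_buzzer(t > 70.0f);                          /* over-temperature alarm */
        }

        int main(void) {
            uint16_t codes[] = {93, 155, 217, 248};         /* roughly 30, 50, 70, 80 degrees C */
            for (size_t i = 0; i < sizeof codes / sizeof codes[0]; ++i) {
                fake_adc_code = codes[i];
                control_step();
            }
            return 0;
        }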

  18. Automated Design of Board and MCM Level Digital Systems.

    Science.gov (United States)

    1997-10-01

  19. Design Automation Algorithm for Soft Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — The majority of design to manufacturing today is still an ad hoc and empirical process. There is a direct need for a single, automated design and fabrication...

  20. An SEU analysis approach for error propagation in digital VLSI CMOS ASICs

    International Nuclear Information System (INIS)

    Baze, M.P.; Bartholet, W.G.; Dao, T.A.; Buchner, S.

    1995-01-01

    A critical issue in the development of ASIC designs is the ability to achieve first-pass fabrication success. Unsuccessful fabrication runs have a serious impact on ASIC costs and schedules. The ability to predict an ASIC's radiation response prior to fabrication is therefore a key issue when designing ASICs for military and aerospace systems. This paper describes an analysis approach for calculating static bit error propagation in synchronous VLSI CMOS circuits, developed as an aid for predicting the SEU response of ASICs. The technique is intended for eventual application as an ASIC development simulation tool that circuit design engineers can use for performance evaluation during the pre-fabrication design process, in much the same way that logic and timing simulators are used

  1. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  2. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  3. Point DCT VLSI Architecture for Emerging HEVC Standard

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmed

    2012-01-01

    Full Text Available This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4×4 up to 32×32, a flexible architecture that supports all of them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area while retaining enough speed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into sparse submatrices in order to reduce the number of multiplications; the remaining multiplications are then completely eliminated using the lifting scheme. The proposed architecture sustains real-time processing of a 1080p HD video codec running at 150 MHz.
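
    The kind of sparse (even-odd) decomposition the architecture exploits can be seen in the 4-point HEVC core transform, sketched below in plain C: the 4×4 matrix multiply splits into two small sub-problems. The folded datapath and the lifting steps that remove the remaining multiplications are not modelled, and the shift value in the example is an arbitrary stage parameter.

        #include <stdio.h>

        /* Even-odd ("partial butterfly") decomposition of the 4-point HEVC core
           transform with coefficients {64, 83, 36}. */
        static void dct4_partial_butterfly(const int x[4], int y[4], int shift) {
            int e0 = x[0] + x[3], e1 = x[1] + x[2];      /* even part */
            int o0 = x[0] - x[3], o1 = x[1] - x[2];      /* odd part  */
            int add = 1 << (shift - 1);                  /* rounding offset */
            y[0] = (64 * (e0 + e1) + add) >> shift;
            y[2] = (64 * (e0 - e1) + add) >> shift;
            y[1] = (83 * o0 + 36 * o1 + add) >> shift;
            y[3] = (36 * o0 - 83 * o1 + add) >> shift;
        }

        int main(void) {
            int x[4] = {10, 20, 30, 40}, y[4];
            dct4_partial_butterfly(x, y, 1);             /* shift depends on bit depth and stage */
            for (int i = 0; i < 4; ++i) printf("y[%d] = %d\n", i, y[i]);
            return 0;
        }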

  4. Designing a Software Test Automation Framework

    Directory of Open Access Journals (Sweden)

    Sabina AMARICAI

    2014-01-01

    Full Text Available Testing is an art and science that should ultimately lead to lower cost businesses through increasing control and reducing risk. Testing specialists should thoroughly understand the system or application from both the technical and the business perspective, and then design, build and implement the minimum-cost, maximum-coverage validation framework. Test Automation is an important ingredient for testing large scale applications. In this paper we discuss several test automation frameworks, their advantages and disadvantages. We also propose a custom automation framework model that is suited for applications with very complex business requirements and numerous interfaces.

  5. Automated design of degenerate codon libraries.

    Science.gov (United States)

    Mena, Marco A; Daugherty, Patrick S

    2005-12-01

    Degenerate codon libraries are frequently used in protein engineering and evolution studies but are often limited to targeting a small number of positions to adequately limit the search space. To mitigate this, codon degeneracy can be limited using heuristics or previous knowledge of the targeted positions. To automate design of libraries given a set of amino acid sequences, an algorithm (LibDesign) was developed that generates a set of possible degenerate codon libraries, their resulting size, and their score relative to a user-defined scoring function. A gene library of a specified size can then be constructed that is representative of the given amino acid distribution or that includes specific sequences or combinations thereof. LibDesign provides a new tool for automated design of high-quality protein libraries that more effectively harness existing sequence-structure information derived from multiple sequence alignment or computational protein design data.
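
    As a toy illustration of one quantity such a tool reports, the sketch below computes the DNA-level size of a degenerate codon library from the IUPAC degeneracy of each base. The codon-to-amino-acid scoring that LibDesign performs is not modelled here, and the NNK/NDT scheme in the example is an arbitrary choice.

        #include <stdio.h>
        #include <string.h>

        /* Degeneracy (number of bases represented) of one IUPAC letter. */
        static int iupac_degeneracy(char b) {
            switch (b) {
                case 'A': case 'C': case 'G': case 'T': return 1;
                case 'R': case 'Y': case 'S': case 'W': case 'K': case 'M': return 2;
                case 'B': case 'D': case 'H': case 'V': return 3;
                case 'N': return 4;
                default:  return 0;   /* invalid letter */
            }
        }

        /* Library size = product of per-base degeneracies over all randomized positions. */
        static double library_size(const char *codons[], int n) {
            double size = 1.0;
            for (int i = 0; i < n; ++i)
                for (size_t j = 0; j < strlen(codons[i]); ++j)
                    size *= iupac_degeneracy(codons[i][j]);
            return size;
        }

        int main(void) {
            /* Three randomized positions encoded with the common NNK and NDT schemes. */
            const char *lib[] = {"NNK", "NNK", "NDT"};
            printf("DNA library size: %.0f sequences\n", library_size(lib, 3));
            return 0;
        }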

  6. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis Steinberg and ... One of the major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to

  7. Model-Based approaches to Human-Automation Systems Design

    DEFF Research Database (Denmark)

    Jamieson, Greg A.; Andersson, Jonas; Bisantz, Ann

    2012-01-01

    Human-automation interaction in complex systems is common, yet design for this interaction is often conducted without explicit consideration of the role of the human operator. Fortunately, there are a number of modeling frameworks proposed for supporting this design activity. However...... (and reportedly one or two critics) can engage one another on several agreed questions about such frameworks. The goal is to aid non-aligned practitioners in choosing between alternative frameworks for their human-automation interaction design challenges....

  8. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low-power and high-throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high-performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.

  9. Controller Design Automation for Aeroservoelastic Design Optimization of Wind Turbines

    NARCIS (Netherlands)

    Ashuri, T.; Van Bussel, G.J.W.; Zaayer, M.B.; Van Kuik, G.A.M.

    2010-01-01

    The purpose of this paper is to integrate the controller design of wind turbines with structure and aerodynamic analysis and use the final product in the design optimization process (DOP) of wind turbines. To do that, the controller design is automated and integrated with an aeroelastic simulation

  10. Automated mixed traffic vehicle design AMTV 2

    Science.gov (United States)

    Johnston, A. R.; Marks, R. A.; Cassell, P. L.

    1982-01-01

    The design of an improved and enclosed Automated Mixed Traffic Transit (AMTT) vehicle is described. AMTT is an innovative concept for low-speed tram-type transit in which suitable vehicles are equipped with sensors and controls to permit them to operate in an automated mode on existing road or walkway surfaces. The vehicle chassis and body design are presented in terms of sketches and photographs. The functional design of the sensing and control system is presented, and modifications which could be made to the baseline design for improved performance, in particular to incorporate a 20-mph capability, are also discussed. The vehicle system is described at the block-diagram-level of detail. Specifications and parameter values are given where available.

  11. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties witnessed the spectacular growth of VLSI technology, with chip sizes increasing from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As the VLSI chip size increases, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is itself likely to become faulty, rendering the circuit useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and gracefully degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with an overhead of no more than 7 percent.

  12. Numerical analysis of electromigration in thin film VLSI interconnections

    NARCIS (Netherlands)

    Petrescu, V.; Mouthaan, A.J.; Schoenmaker, W.; Angelescu, S.; Vissarion, R.; Dima, G.; Wallinga, Hans; Profirescu, M.D.

    1995-01-01

    Due to the continuing downscaling of the dimensions in VLSI circuits, electromigration is becoming a serious reliability hazard. A software tool based on finite element analysis has been developed to solve the two partial differential equations of the two particle vacancy/imperfection model.

  13. Heavy ion tests on programmable VLSI

    International Nuclear Information System (INIS)

    Provost-Grellier, A.

    1989-11-01

    Radiation from the space environment induces operational damage in onboard computer systems. A strategy for the qualification and selection of Very Large Scale Integration (VLSI) circuits is therefore needed. The 'upset' phenomenon is known to be the most critical integrated-circuit radiation effect. Strategies for testing integrated circuits are reviewed. A method and a test device were developed and applied to circuits that are candidates for space applications. Cyclotron, synchrotron and californium source experiments were carried out [fr]

  14. Automation in control laboratory and related information management system

    International Nuclear Information System (INIS)

    Gopalan, B.; Syamsundar, S.

    1997-01-01

    In the field of technology, the word automation is often employed to indicate many types of mechanized operations, though in the strict sense it means those operations which involve the application of an element of knowledge or decision making without the intervention of the human mind. In laboratory practice, for example, the use of a multi-sample array turret and a millivolt recorder connected to a spectrophotometer represents mechanized operation, as these gadgets help eliminate human muscle power. If a microprocessor or a computer is connected to the above equipment to interpret the measured parameters and establish calibration graphs or display concentration results, then a truly automated situation results, in which the application of the human mind is eliminated. Modern laboratory analysis abounds in the employment of automatic analytical equipment, thanks to developments in the fields of VLSI, computers, software, etc., and this has given rise to the concept of laboratory automation

  15. Automated minimax design of networks

    DEFF Research Database (Denmark)

    Madsen, Kaj; Schjær-Jacobsen, Hans; Voldby, J

    1975-01-01

    A new gradient algorithm for the solution of nonlinear minimax problems has been developed. The algorithm is well suited for automated minimax design of networks and it is very simple to use. It compares favorably with recent minimax and least pth algorithms. General convergence problems related...

  16. Automated electronic filter design

    CERN Document Server

    Banerjee, Amal

    2017-01-01

    This book describes a novel, efficient and powerful scheme for designing and evaluating the performance characteristics of any electronic filter designed with predefined specifications. The author explains techniques that enable readers to eliminate complicated manual, and thus error-prone and time-consuming, steps of traditional design techniques. The presentation includes demonstration of efficient automation, using an ANSI C language program, which accepts any filter design specification (e.g. Chebyshev low-pass filter, cut-off frequency, pass-band ripple etc.) as input and generates as output a SPICE (Simulation Program with Integrated Circuit Emphasis) format netlist. Readers can then use this netlist to run simulations with any version of the popular SPICE simulator, increasing accuracy of the final results, without violating any of the key principles of the traditional design scheme.

  17. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.
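
    The arithmetic trick behind this kind of residue-domain complex multiplier can be shown with the small Fermat prime F4 = 2^16 + 1, where j = 2^8 satisfies j^2 ≡ -1, so a complex product maps to two independent residue channels and costs only two modular multiplications. The sketch below is purely illustrative; the paper targets larger Fermat moduli and dedicated hardware residue arithmetic.

        #include <stdio.h>
        #include <stdint.h>

        #define F4    65537u    /* Fermat prime 2^16 + 1 */
        #define J     256u      /* J*J mod F4 == F4 - 1, i.e. -1 */
        #define INV2  32769u    /* (F4 + 1) / 2, the inverse of 2 mod F4 */
        #define INVJ  65281u    /* F4 - J, since j^-1 = -j when j^2 = -1 */

        static uint32_t mulmod(uint32_t a, uint32_t b) { return (uint32_t)(((uint64_t)a * b) % F4); }
        static uint32_t addmod(uint32_t a, uint32_t b) { return (a + b) % F4; }
        static uint32_t submod(uint32_t a, uint32_t b) { return (a + F4 - b) % F4; }

        int main(void) {
            uint32_t a = 7, b = 2, c = 5, d = 3;       /* (7 + 2i) * (5 + 3i) = 29 + 31i */

            /* Map each complex number to the pair (re + j*im, re - j*im) mod F4. */
            uint32_t z1 = addmod(a, mulmod(J, b)), z2 = submod(a, mulmod(J, b));
            uint32_t x1 = addmod(c, mulmod(J, d)), x2 = submod(c, mulmod(J, d));

            /* Component-wise product: only TWO modular multiplications. */
            uint32_t w1 = mulmod(z1, x1), w2 = mulmod(z2, x2);

            /* Map back: re = (w1 + w2)/2, im = (w1 - w2)/(2j). */
            uint32_t re = mulmod(addmod(w1, w2), INV2);
            uint32_t im = mulmod(mulmod(submod(w1, w2), INV2), INVJ);

            printf("product mod F4: %u + %ui (expected 29 + 31i)\n", re, im);
            return 0;
        }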

  18. An electron undulating ring for VLSI lithography

    International Nuclear Information System (INIS)

    Tomimasu, T.; Mikado, T.; Noguchi, T.; Sugiyama, S.; Yamazaki, T.

    1985-01-01

    The development of the ETL storage ring ''TERAS'' as an undulating ring has been continued to achieve wide-area exposure of synchrotron radiation (SR) in VLSI lithography. Stable vertical and horizontal undulating motions of stored beams are demonstrated around the horizontal design orbit of TERAS, using two small steering magnets, one for vertical undulation and the other for horizontal undulation. Each steering magnet is inserted into the periodic configuration of guide field elements. As one useful application of undulating electron beams, a vertically wide exposure of SR has been demonstrated in SR lithography. The maximum vertical deviation from the design orbit occurs near the steering magnet. The maximum vertical tilt angle of the undulating beam near the nodes is about ±2 mrad for a steering magnetic field of 50 gauss. Another proposal is for high-intensity, uniform and wide exposure of SR from a wiggler installed in TERAS, using vertical and horizontal undulating motions of stored beams. A 1.4 m long permanent magnet wiggler has been installed for this purpose this April

  19. Convolving optically addressed VLSI liquid crystal SLM

    Science.gov (United States)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 × 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 × 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm2. The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
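
    For reference, the operation each pixel of the array performs in the optical domain is an ordinary 3 × 3 kernel convolution, sketched below in plain C on a synthetic step-edge image. The chip's fixed edge-finding coefficients are not given in the abstract, so the Laplacian-style kernel here is an illustrative assumption.

        #include <stdio.h>

        #define W 6
        #define H 6

        int main(void) {
            int img[H][W], out[H][W] = {{0}};
            int k[3][3] = { {-1, -1, -1}, {-1, 8, -1}, {-1, -1, -1} };  /* edge-enhancing kernel */

            for (int y = 0; y < H; ++y)              /* step edge: left half dark, right half bright */
                for (int x = 0; x < W; ++x)
                    img[y][x] = (x < W / 2) ? 0 : 10;

            for (int y = 1; y < H - 1; ++y)          /* skip the 1-pixel border */
                for (int x = 1; x < W - 1; ++x) {
                    int acc = 0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            acc += k[dy + 1][dx + 1] * img[y + dy][x + dx];
                    out[y][x] = acc;
                }

            for (int y = 0; y < H; ++y) {            /* strong responses mark the edge column */
                for (int x = 0; x < W; ++x) printf("%4d", out[y][x]);
                printf("\n");
            }
            return 0;
        }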

  20. DESIGN OF SMALL AUTOMATION WORK CELL SYSTEM DEMONSTRATIONS

    International Nuclear Information System (INIS)

    TURNER, C.; PEHL, J.

    2000-01-01

    The introduction of automation systems into many of the facilities dealing with the production, use and disposition of nuclear materials has been an ongoing objective. Many previous attempts have been made, using a variety of monolithic and, in some cases, modular technologies. Many of these attempts were less than successful, owing to the difficulty of the problem, the lack of maturity of the technology, and over optimism about the capabilities of a particular system. Consequently, it is not surprising that suggestions that automation can reduce worker Occupational Radiation Exposure (ORE) levels are often met with skepticism and caution. The development of effective demonstrations of these technologies is of vital importance if automation is to become an acceptable option for nuclear material processing environments. The University of Texas Robotics Research Group (UTRRG) has been pursuing the development of technologies to support modular small automation systems (each of less than 5 degrees-of-freedom) and the design of those systems for more than two decades. Properly designed and implemented, these technologies have a potential to reduce the worker ORE associated with work in nuclear materials processing facilities. Successful development of systems for these applications requires the development of technologies that meet the requirements of the applications. These application requirements form a general set of rules that applicable technologies and approaches need to adhere to, but in and of themselves are generally insufficient for the design of a specific automation system. For the design of an appropriate system, the associated task specifications and relationships need to be defined. These task specifications also provide a means by which appropriate technology demonstrations can be defined. Based on the requirements and specifications of the operations of the Advanced Recovery and Integrated Extraction System (ARIES) pilot line at Los Alamos National

  1. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  2. Toward designing for trust in database automation

    Energy Technology Data Exchange (ETDEWEB)

    Duez, P. P.; Jamieson, G. A. [Cognitive Engineering Laboratory, Univ. of Toronto, 5 King' s College Rd., Toronto, Ont. M5S 3G8 (Canada)

    2006-07-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process

  3. Toward designing for trust in database automation

    International Nuclear Information System (INIS)

    Duez, P. P.; Jamieson, G. A.

    2006-01-01

    Appropriate reliance on system automation is imperative for safe and productive work, especially in safety-critical systems. It is unsafe to rely on automation beyond its designed use; conversely, it can be both unproductive and unsafe to manually perform tasks that are better relegated to automated tools. Operator trust in automated tools mediates reliance, and trust appears to affect how operators use technology. As automated agents become more complex, the question of trust in automation is increasingly important. In order to achieve proper use of automation, we must engender an appropriate degree of trust that is sensitive to changes in operating functions and context. In this paper, we present research concerning trust in automation in the domain of automated tools for relational databases. Lee and See have provided models of trust in automation. One model developed by Lee and See identifies three key categories of information about the automation that lie along a continuum of attributional abstraction. Purpose-, process-, and performance-related information serve, both individually and through inferences between them, to describe automation in such a way as to engender properly-calibrated trust. Thus, one can look at information from different levels of attributional abstraction as a general requirements analysis for information key to appropriate trust in automation. The model of information necessary to engender appropriate trust in automation [1] is a general one. Although it describes categories of information, it does not provide insight on how to determine the specific information elements required for a given automated tool. We have applied the Abstraction Hierarchy (AH) to this problem in the domain of relational databases. The AH serves as a formal description of the automation at several levels of abstraction, ranging from a very abstract purpose-oriented description to a more concrete description of the resources involved in the automated process

  4. Automated reasoning applications to design analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Given the necessary relationships and definitions of design functions and components, validation of system incarnation (the physical product of design) and sneak function analysis can be achieved via automated reasoners. The relationships and definitions must define the design specification and incarnation functionally. For the design specification, the hierarchical functional representation is based on physics and engineering principles and bounded by design objectives and constraints. The relationships and definitions of the design incarnation are manifested as element functional definitions, state relationship to functions, functional relationship to direction, element connectivity, and functional hierarchical configuration

  5. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.
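
    The feedforward (Hubel-Wiesel style) aggregation described above can be mimicked in a few lines of plain C: a difference-of-boxes centre-surround filter stands in for the silicon retina, and each orientation unit simply sums those outputs along a line of pixels at 0, 60, or 120 degrees. Image size, the 7-pixel aggregation length, and the test stimulus are illustrative assumptions; no analog non-idealities or fixed-pattern noise are modelled.

        #include <stdio.h>
        #include <math.h>

        #define N  15
        #define PI 3.14159265358979323846

        static double cs[N][N];                          /* centre-surround (retina) output */

        /* Sum centre-surround responses along a line at the given orientation. */
        static double oriented_response(int cx, int cy, double deg) {
            double rad = deg * PI / 180.0, sum = 0.0;
            for (int t = -3; t <= 3; ++t) {              /* aggregate 7 pixels along the axis */
                int x = cx + (int)lround(t * cos(rad));
                int y = cy + (int)lround(t * sin(rad));
                if (x >= 0 && x < N && y >= 0 && y < N) sum += cs[y][x];
            }
            return sum;
        }

        int main(void) {
            double img[N][N];
            for (int y = 0; y < N; ++y)                  /* horizontal bright bar at y = 7 */
                for (int x = 0; x < N; ++x)
                    img[y][x] = (y == 7) ? 1.0 : 0.0;

            for (int y = 1; y < N - 1; ++y)              /* centre minus mean of 8 neighbours */
                for (int x = 1; x < N - 1; ++x) {
                    double surround = 0.0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            if (dx || dy) surround += img[y + dy][x + dx];
                    cs[y][x] = img[y][x] - surround / 8.0;
                }

            const double angles[] = {0.0, 60.0, 120.0};  /* 0 deg wins for the horizontal bar */
            for (int i = 0; i < 3; ++i)
                printf("%3.0f deg: %.2f\n", angles[i], oriented_response(7, 7, angles[i]));
            return 0;
        }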

  6. Thermal battery automated assembly station conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, D

    1988-08-01

    Thermal battery assembly involves many operations which are labor-intensive. In August 1986, a project team was formed at GE Neutron Devices to investigate and evaluate more efficient and productive battery assembly techniques through the use of automation. The result of this study was the acceptance of a plan to automate the piece part pellet fabrication and battery stacking operations by using computerized pellet presses and robots which would be integrated by a main computer. This report details the conceptual design and development plan to be followed in the fabrication, development, and implementation of a thermal battery automated assembly station. 4 figs., 8 tabs.

  7. EcoDesign 2.0 - Quantitative EcoDesign within Drives and Automation Technologies

    DEFF Research Database (Denmark)

    Auer, Johannes

    with single products, eco-design of industrial automation and drive technologies has to address the key issue of the solution’s usage stage in terms of system design corresponding to the application context, where several products work in conjunction with each other. Further, in response to the above...... engineering approaches. Research focus lies in areas where these fields overlap and complement each other in the development process of given applications, in particular the development and implementation of Drives and Automation Technologies. The evaluation of the research background, based on research...... and currently implemented state-of-the-art of ecodesign of drives and automation technologies in discreet and process industries was evaluated, putting it in context to the processes and portfolio of the Siemens AG, Process Industries & Drives Division (PD), as well as current sustainability challenges...

  8. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.

  9. A Toolset for Supporting Iterative Human Automation: Interaction in Design

    Science.gov (United States)

    Feary, Michael S.

    2010-01-01

    The addition of automation has greatly extended humans' capability to accomplish tasks, including those that are difficult, complex and safety critical. The majority of Human-Automation Interaction (HAI) results in more efficient and safe operations; however, certain unexpected automation behaviors or "automation surprises" can be frustrating and, in certain safety-critical operations (e.g. transportation, manufacturing control, medicine), may result in injuries or the loss of life (Mellor, 1994; Leveson, 1995; FAA, 1995; BASI, 1998; Sheridan, 2002). This paper describes the development of a design tool that enables the rapid development and evaluation of automation prototypes. The ultimate goal of the work is to provide a design platform upon which automation surprise vulnerability analyses can be integrated.

  10. AUTOMATION DESIGN FOR MONORAIL - BASED SYSTEM PROCESSES

    Directory of Open Access Journals (Sweden)

    Bunda BESA

    2016-12-01

    Full Text Available Currently, conventional methods of decline development put enormous cost pressure on the profitability of mining operations. This is the case with narrow vein ore bodies, where current methods and mine design of decline development may be too expensive to support economic extraction of the ore. According to studies, the time it takes to drill, clean and blast an end in conventional decline development can be up to 224 minutes. This is because once an end is blasted, cleaning must first be completed before drilling can commence, resulting in low advance rates per shift. Improvements in advance rates during decline development can be achieved by application of the Electric Monorail Transport System (EMTS) based drilling system. The system consists of drilling and loading components that use monorail technology to drill and clean the face during decline development. The two systems work simultaneously at the face in such a way that as the top part of the face is being drilled the pneumatic loading system cleans the face. However, to improve the efficiency of the two systems, critical processes performed by the two systems during mining operations must be automated. Automation increases safety and productivity, reduces operator fatigue and also reduces the labour costs of the system. The aim of this paper is, therefore, to describe automation designs of the two processes performed by the monorail drilling and loading systems during operations. During automation design, critical processes performed by the two systems and the control requirements necessary to allow the two systems to execute such processes automatically have also been identified.

  11. Design and Implementation of Company Tailored Automated Material Handling

    DEFF Research Database (Denmark)

    Langer, Gilad; Bilberg, Arne

    1996-01-01

    This article focuses on the problems of analysing automation of material handling systems in order to develop an efficient automated solution that is specifically tailored to the company. The research has resulted in the development of new methods for evaluating factory automation from design...... to implementation. The goals of the research were to analyse and evaluate automation in order to obtain an advantageous combination of human and automated resources. The idea is to assess different solutions in a virtual environment, where experiments and analyses can be performed so that the company can justify...... for their application with computer aided information processing tools. The framework is named the "Automated Material Handling (AMH) Preference GuideLine". The research has been carried out in close co-operation with Danish and European industry, where implementations of automation can be referred to. It is our...

  12. Preface to the special section on human factors and automation in vehicles: designing highly automated vehicles with the driver in mind.

    Science.gov (United States)

    Merat, Natasha; Lee, John D

    2012-10-01

    This special section brings together diverse research regarding driver interaction with advanced automotive technology to guide design of increasingly automated vehicles. Rapidly evolving vehicle automation will likely change cars and trucks more in the next 5 years than the preceding 50, radically redefining what it means to drive. This special section includes 10 articles from European and North American researchers reporting simulator and naturalistic driving studies. Little research has considered the consequences of fully automated driving, with most focusing on lane-keeping and speed control systems individually. The studies reveal two underlying design philosophies: automate driving versus support driving. Results of several studies, consistent with previous research in other domains, suggest that the automate philosophy can delay driver responses to incidents in which the driver has to intervene and take control from the automation. Understanding how to orchestrate the transfer or sharing of control between the system and the driver, particularly in critical incidents, emerges as a central challenge. Designers should not assume that automation can substitute seamlessly for a human driver, nor can they assume that the driver can safely accommodate the limitations of automation. Designers, policy makers, and researchers must give careful consideration to what role the person should have in highly automated vehicles and how to support the driver if the driver is to be responsible for vehicle control. As in other domains, driving safety increasingly depends on the combined performance of the human and automation, and successful designs will depend on recognizing and supporting the new roles of the driver.

  13. Automating expert role to determine design concept in Kansei Engineering

    Science.gov (United States)

    Lokman, Anitawati Mohd; Haron, Mohammad Bakri Che; Abidin, Siti Zaleha Zainal; Khalid, Noor Elaiza Abd

    2016-02-01

    Affect has become imperative to product quality. In the affective design field, Kansei Engineering (KE) has been recognized as a technology that enables the discovery of consumers' emotions and the formulation of guidelines for designing products that win consumers in the competitive market. Although it is a powerful technology, there is no rule of thumb for its analysis and interpretation process: KE expertise is required to determine sets of related Kansei and the significant concept of emotion. Many research endeavors are handicapped by the limited number of available and accessible KE experts. This work simulates the role of the expert with the Natphoric algorithm, thus providing a sound solution to the complexity and flexibility of KE. The algorithm is designed to learn the process from training datasets taken from previous KE research works. A framework for automated KE is then designed to realize the development of an automated KE system. A comparative analysis is performed to determine the feasibility of the developed prototype for automating the process. The results show that the significant Kansei determined by the manual KE implementation and by the automated process are highly similar. KE research advocates will benefit from this system for automatically determining significant design concepts.

  14. Automating Relational Database Design for Microcomputer Users.

    Science.gov (United States)

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  15. Design, Automation, and Test in Europe

    DEFF Research Database (Denmark)

    The Design, Automation, and Test in Europe (DATE) conference celebrated its tenth anniversary in 2007. As a tribute to the chip and system-level design and design technology community, this book presents a compilation of the three most influential papers of each year. This provides an excellent... Systems in CMOS and Beyond; Physical Design and Validation; Test and Verification. The winners of the prestigious EDAA Lifetime Achievement Award as well as other recognized experts in their field wrote an introduction to each section, summarizing the history in their domain and indicating how...

  16. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
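
    A software analogue of the trigger logic is sketched below: each frame is compared against a slowly updated reference (long-term, DC-like changes) and against the previous frame (short-term, AC-like changes), and a trigger is asserted when either difference exceeds a threshold. The sum-of-absolute-differences measure, thresholds, and synthetic frames are illustrative assumptions; the chip's fuzzy-logic decision stage is not modelled.

        #include <stdio.h>
        #include <stdlib.h>

        #define NPIX 16

        /* Sum of absolute differences between two frames. */
        static int sad(const int *a, const int *b) {
            int s = 0;
            for (int i = 0; i < NPIX; ++i) s += abs(a[i] - b[i]);
            return s;
        }

        int main(void) {
            int reference[NPIX] = {0}, previous[NPIX] = {0}, frame[NPIX];
            const int threshold = 400;

            for (int t = 0; t < 6; ++t) {
                for (int i = 0; i < NPIX; ++i)       /* synthetic stream: an object appears at t = 3 */
                    frame[i] = (t >= 3 && i >= 4 && i < 8) ? 200 : 0;

                int dc = sad(frame, reference);      /* long-term (DC-like) change */
                int ac = sad(frame, previous);       /* short-term (AC-like) change */
                printf("t=%d dc=%4d ac=%4d %s\n", t, dc, ac,
                       (dc > threshold || ac > threshold) ? "TRIGGER" : "");

                for (int i = 0; i < NPIX; ++i) {
                    previous[i]  = frame[i];
                    reference[i] = (15 * reference[i] + frame[i]) / 16;  /* slow reference update */
                }
            }
            return 0;
        }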

  17. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.

  18. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding, and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the processing unit of Romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM image coding. Finally, they describe an application which is now being developed in their laboratory: testing VLSI circuits with new methods combining SEM voltage contrast and image processing. 15 references.

  19. TARDIS: An Automation Framework for JPL Mission Design and Navigation

    Science.gov (United States)

    Roundhill, Ian M.; Kelly, Richard M.

    2014-01-01

    Mission Design and Navigation at the Jet Propulsion Laboratory has implemented an automation framework tool to assist in orbit determination and maneuver design analysis. This paper describes the lessons learned from previous automation tools and how they have been implemented in this tool. In addition this tool has revealed challenges in software implementation, testing, and user education. This paper describes some of these challenges and invites others to share their experiences.

  20. An area-efficient topology for VLSI implementation of Viterbi decoders and other shuffle-exchange type structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1991-01-01

    A topology for single-chip implementation of computing structures based on shuffle-exchange (SE)-type interconnection networks is presented. The topology is suited for structures with a small number of processing elements (i.e. 32-128) whose area cannot be neglected compared to the area required... The topology has been used in a VLSI implementation of the add-compare-select (ACS) module of a fully parallel K=7, R=1/2 Viterbi decoder. Both the floor-planning issues and some of the important algorithm and circuit-level aspects of this design are discussed. The chip has been designed and fabricated in a 2... The interconnection network occupies 32% of the area...

  1. Automation of the aircraft design process

    Science.gov (United States)

    Heldenfels, R. R.

    1974-01-01

    The increasing use of the computer to automate the aerospace product development and engineering process is examined with emphasis on structural analysis and design. Examples of systems of computer programs in aerospace and other industries are reviewed and related to the characteristics of aircraft design in its conceptual, preliminary, and detailed phases. Problems with current procedures are identified, and potential improvements from optimum utilization of integrated disciplinary computer programs by a man/computer team are indicated.

  2. System of automated map design

    International Nuclear Information System (INIS)

    Ponomarjov, S.Yu.; Rybalko, S.I.; Proskura, N.I.

    1992-01-01

    The preprint 'System of automated map design' describes a program shell for constructing territory maps by drawing the level lines (contours) of an arbitrary two-dimensional field (in particular, a radionuclide concentration field). The work schedule and data structures are supplied, as well as data on system performance. The preprint can be useful for experts in radioecology and for all persons involved in territory pollution mapping or multi-purpose geochemical mapping. (author)

  3. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    Science.gov (United States)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties cannot be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds and therefore making control of 'reflex-type' motions envisionable. The use of these boards and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments are first discussed. Then how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data is described. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoor and outdoor experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or a safety-enhancing driver's aid using the new fuzzy inferencing hardware system and some human
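
    A minimal sketch of the kind of sonar-driven fuzzy rules described above, in plain C: three range readings are fuzzified into near/far sets, a few rules are combined with min as the AND operator, and steering and speed commands are obtained by weighted-average defuzzification. The membership breakpoints, rule set, and output values are invented for illustration and are not those of the fabricated boards.

        #include <stdio.h>

        /* Membership of "near": 1 below lo, falling linearly to 0 at hi. */
        static double near_mf(double d, double lo, double hi) {
            if (d <= lo) return 1.0;
            if (d >= hi) return 0.0;
            return (hi - d) / (hi - lo);
        }
        static double far_mf(double d, double lo, double hi) { return 1.0 - near_mf(d, lo, hi); }
        static double fmin2(double a, double b) { return a < b ? a : b; }

        int main(void) {
            double left = 0.8, front = 0.5, right = 2.5;   /* sonar ranges in metres */

            /* Rule strengths (AND = min). */
            double r_turn_right = fmin2(near_mf(left, 0.5, 1.5), far_mf(right, 0.5, 1.5));
            double r_turn_left  = fmin2(near_mf(right, 0.5, 1.5), far_mf(left, 0.5, 1.5));
            double r_slow_down  = near_mf(front, 0.4, 2.0);
            double r_speed_up   = far_mf(front, 0.4, 2.0);

            /* Weighted-average defuzzification onto [-1, 1] steering and [0, 1] speed. */
            double steer = (r_turn_right * 1.0 + r_turn_left * -1.0) /
                           (r_turn_right + r_turn_left + 1e-9);
            double speed = (r_slow_down * 0.2 + r_speed_up * 0.9) /
                           (r_slow_down + r_speed_up + 1e-9);

            printf("steer = %+.2f (positive = right), speed = %.2f\n", steer, speed);
            return 0;
        }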

  4. Design of automation tools for management of descent traffic

    Science.gov (United States)

    Erzberger, Heinz; Nedell, William

    1988-01-01

    The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system consisting of descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software has been implemented on a workstation made by Sun Microsystems, Inc. It is planned to use this configuration in operational

  5. Review of Automated Design and Optimization of MEMS

    DEFF Research Database (Denmark)

    Achiche, Sofiane; Fan, Zhun; Bolognini, Francesca

    2007-01-01

    carried out. This paper presents a review of these techniques. The design task of MEMS is usually divided into four main stages: System Level, Device Level, Physical Level and the Process Level. The state of the art of automated MEMS design in each of these levels is investigated.

  6. Automated platform for designing multiple robot work cells

    Science.gov (United States)

    Osman, N. S.; Rahman, M. A. A.; Rahman, A. A. Abdul; Kamsani, S. H.; Bali Mohamad, B. M.; Mohamad, E.; Zaini, Z. A.; Rahman, M. F. Ab; Mohamad Hatta, M. N. H.

    2017-06-01

    Designing multiple robot work cells is a very knowledge-intensive, intricate, and time-consuming process. This paper elaborates the development of a computer-aided design program, with a user-friendly interface, for generating multiple robot work cells. The primary purpose of this work is to provide a fast and easy platform that lowers cost and human involvement and needs only minimal trial-and-error adjustment. The automated platform is constructed based on the variant-shaped configuration concept and its mathematical model. The robot work cell layout, system components, and construction procedure of the automated platform are discussed in this paper, where the integration of these items automatically provides the optimum robot work cell design according to the information set by the user. The system is implemented on top of CATIA V5 software and utilises its Part Design, Assembly Design, and Macro tools. The current outcomes of this work provide a basis for future investigation into developing a flexible configuration system for multiple robot work cells.

  7. Ant System-Corner Insertion Sequence: An Efficient VLSI Hard Module Placer

    Directory of Open Access Journals (Sweden)

    HOO, C.-S.

    2013-02-01

    Full Text Available Placement is important in VLSI physical design as it determines the time-to-market and the chip's reliability. In this paper, a new floorplan representation coupled with the Ant System, namely the Corner Insertion Sequence (CIS), is proposed. Although CIS's search complexity is smaller than that of the state-of-the-art representation Corner Sequence (CS), CIS adopts a preset boundary on the placement and hence reaches a search bound similar to that of CS. This enables previously unutilized corner edges to become viable. Also, the redundancy of the CS representation is eliminated in CIS, which leads to CIS's lower search complexity. Experimental results on the Microelectronics Center of North Carolina (MCNC) hard block benchmark circuits show that the proposed algorithm performs comparably in terms of area yet is at least two times faster than CS.

  8. Optimizing human-system interface automation design based on a skill-rule-knowledge framework

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Yenn, T.-C.; Yang, C.-W.

    2010-01-01

    This study considers the technological change that has occurred in complex systems within the past 30 years. The role of human operators in controlling and interacting with complex systems following the technological change was also investigated. Modernization of instrumentation and control systems and components leads to a new issue of human-automation interaction, in which human operational performance must be considered in automated systems. The human-automation interaction can differ in its types and levels. A system design issue is usually realized: given these technical capabilities, which system functions should be automated and to what extent? A good automation design can be achieved by making an appropriate human-automation function allocation. To our knowledge, only a few studies have been published on how to achieve appropriate automation design with a systematic procedure. Further, there is a surprising lack of information on examining and validating the influences of levels of automation (LOAs) on instrumentation and control systems in the advanced control room (ACR). The study we present in this paper proposes a systematic framework to help in making an appropriate decision towards types of automation (TOA) and LOAs based on a 'Skill-Rule-Knowledge' (SRK) model. From the evaluation results, it was shown that the use of either automatic mode or semiautomatic mode is insufficient to prevent human errors. To prevent the occurrence of human errors and ensure safety in the ACR, the proposed framework can be valuable for making decisions in human-automation allocation.

  9. Drift chamber tracking with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-10-01

    We have tested a commercial analog VLSI neural network chip for finding, in real time, the intercept and slope of charged particles traversing a drift chamber. Voltages proportional to the drift times were input to the Intel ETANN chip, and the outputs were recorded and later compared offline to conventional track fits. We will discuss the chamber and test setup, the chip specifications, and results of recent tests. We will also briefly discuss possible applications in high energy physics detector triggers.

  10. Proceedings of Design, Automation and Test in Europe (DATE07)

    DEFF Research Database (Denmark)

    Welcome to the DATE 07 Conference Proceedings. DATE combines the world’s leading electronic systems design conference and Europe's leading international exhibition for electronic design, automation and test, from system level hardware and software implementation right down to integrated circuit...... with 78 sessions covering the latest in system design and embedded software, IC design methodologies and EDA tool developments. One of the main strengths of the conference is a wide but high-quality coverage of design, design automation and test topics, from the system level (including PCB and FPGA......) to the integrated circuit level. In addition, for the third year a special embedded software track is offered to allow for the increasing importance of software in embedded systems. Compared with previous years, submissions in design, test and embedded software have grown significantly, showing a clear trend toward...

  11. Conceptual designs of automated systems for underground emplacement and retrieval of nuclear waste

    International Nuclear Information System (INIS)

    Slocum, A.H.; Hou, W.M.; Park, K.; Hochmuth, C.; Thurston, D.C.

    1987-01-01

    Current designs of underground nuclear waste repositories have not adequately addressed the possibility of automated, unmanned emplacement and retrieval. This report presents design methodologies for the development of an automated system for underground emplacement of nuclear waste. By scaling generic issues to different repositories, it is shown that a two-vehicle automated waste emplacement/retrieval system can be designed to operate in a fail-safe mode. Evaluation of cost is not possible at this time. Significant gains in worker safety, however, can be realized by minimizing the possibility of human exposure.

  12. A Review Of Design And Control Of Automated Guided Vehicle Systems

    OpenAIRE

    Le-Anh, Tuan; Koster, René

    2004-01-01

    This paper presents a review on design and control of automated guided vehicle systems. We address most key related issues including guide-path design, estimating the number of vehicles, vehicle scheduling, idle-vehicle positioning, battery management, vehicle routing, and conflict resolution. We discuss and classify important models and results from key publications in literature on automated guided vehicle systems, including often-neglected areas, such as idle-vehicle positionin...

  13. High-energy heavy ion testing of VLSI devices for single event ...

    Indian Academy of Sciences (India)

    Unknown

    per describes the high-energy heavy ion radiation testing of VLSI devices for single event upset (SEU) ... The experimental set up employed to produce low flux of heavy ions viz. silicon ... through which they pass, leaving behind a wake of elec- ... for use in Bus Management Unit (BMU) and bulk CMOS ... was scheduled.

  14. Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)

    National Research Council Canada - National Science Library

    Petre, P; Visher, J; Shringarpure, R; Valley, F; Swaminathan, M

    2005-01-01

    Automated design tools and integrated design flow methodologies were developed that demonstrated more than an order-of-magnitude reduction in cycle time and cost for mixed signal (digital/analog/RF...

  15. Adaptive Automation Design and Implementation

    Science.gov (United States)

    2015-09-17

    with an automated system to a real-world adaptive automation system implementation. There have been plenty of adaptive automation ... of systems without increasing manpower requirements by allocating routine tasks to automated aids, improving safety through the use of automated ... between intermediate levels of automation, explicitly defining which human task a given level automates. Each model aids the creation and classification

  16. Development of design principles for automated systems in transport control.

    Science.gov (United States)

    Balfe, Nora; Wilson, John R; Sharples, Sarah; Clarke, Theresa

    2012-01-01

    This article reports the results of a qualitative study investigating attitudes towards and opinions of an advanced automation system currently used in UK rail signalling. In-depth interviews were held with 10 users, key issues associated with automation were identified and the automation's impact on the signalling task investigated. The interview data highlighted the importance of the signallers' understanding of the automation and their (in)ability to predict its outputs. The interviews also covered the methods used by signallers to interact with and control the automation, and the perceived effects on their workload. The results indicate that despite a generally low level of understanding and ability to predict the actions of the automation system, signallers have developed largely successful coping mechanisms that enable them to use the technology effectively. These findings, along with parallel work identifying desirable attributes of automation from the literature in the area, were used to develop 12 principles of automation which can be used to help design new systems which better facilitate cooperative working. The work reported in this article was completed with the active involvement of operational rail staff who regularly use automated systems in rail signalling. The outcomes are currently being used to inform decisions on the extent and type of automation and user interfaces in future generations of rail control systems.

  17. Design and Implementation of a Sort-Free K-Best Sphere Decoder

    KAUST Repository

    Mondal, Sudip

    2012-10-18

    This paper describes the design and VLSI architecture of a 4x4 breadth-first K-Best MIMO decoder using a 64-QAM scheme. A novel sort-free approach to path extension, as well as quantized metrics, results in a high-throughput VLSI architecture with lower power and area consumption compared to state-of-the-art published systems. Functionality is confirmed via an FPGA implementation on a Xilinx Virtex II Pro FPGA. Comparisons of simulations and measurements are given, and FPGA utilization figures are provided. Finally, VLSI architectural tradeoffs are explored for a synthesized ASIC implementation in a 65 nm CMOS technology.

  18. Simple heuristics: A bridge between manual core design and automated optimization methods

    International Nuclear Information System (INIS)

    White, J.R.; Delmolino, P.M.

    1993-01-01

    The primary function of RESCUE is to serve as an aid in the analysis and identification of feasible loading patterns for LWR reload cores. The unique feature of RESCUE is that its physics model is based on some recent advances in generalized perturbation theory (GPT) methods. The high order GPT techniques offer the accuracy, computational efficiency, and flexibility needed for the implementation of a full range of capabilities within a set of compatible interactive (manual and semi-automated) and automated design tools. The basic design philosophy and current features within RESCUE are reviewed, and the new semi-automated capability is highlighted. The online advisor facility appears quite promising and it provides a natural bridge between the traditional trial-and-error manual process and the recent progress towards fully automated optimization sequences. (orig.)

  19. Design and implementation of tutorials for PLC B&R Automation

    OpenAIRE

    Sýkora, Daniel

    2014-01-01

    This bachelor's thesis deals with the design and implementation of tutorials for B&R PLCs, together with detailed instructions for working in Automation Studio. It also contains information about the company B&R Automation Ges. m. b. H. and its products. The tutorials were designed in the B&R Automation Studio development environment. A practical temperature-control task was implemented and tested on a B&R X20 PLC.

  20. Design automation of switching mode high voltage power supply for nuclear instruments

    International Nuclear Information System (INIS)

    El-araby, S.M.S.

    1999-01-01

    This paper presents an automation procedure for the design of switching-mode high-voltage power supplies using a PC program. The procedure permits the selection of a ready-made or custom-designed ferrite transformer; the selection can be made according to the designer's preference, as the program includes complete information about ready-made ferrite transformers in a database. The procedure is based on a suggested template circuit. The Micro-Cap IV simulation package is used to verify the desired high-voltage power supply design, and the simulation results agree quite well with those of the suggested procedure. Design aspects and the development needed to increase automation capabilities are also discussed

  1. A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station

    Science.gov (United States)

    Kwatra, S. C.; King, Brent

    1995-01-01

    This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single-chip architecture is used to enable this chip to be easily employed in the VSAT system. The demodulator contains lowpass filters, integrate-and-dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design starts with a functional representation of the system in the C programming language and then progresses into a register-based representation in VHDL. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block-routed to form the main components of the system, and the main components are block-routed to form the final demodulator.
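    The record above describes a hardware signal chain; as a rough software illustration of the demodulation idea (integrate-and-dump followed by hard symbol decisions), the following Python sketch models a Gray-mapped QPSK link at baseband. The samples-per-symbol count, noise level and constellation mapping are assumptions for illustration and do not reproduce the thesis's VHDL design.

```python
# Minimal floating-point sketch of QPSK demodulation with an integrate-and-dump
# front end and hard-decision symbol mapping (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
SPS = 8                      # samples per symbol (assumed)
N_SYM = 1000                 # number of symbols
NOISE_STD = 0.2

# Gray-mapped QPSK constellation: 2 bits -> complex symbol
CONST = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

bits = rng.integers(0, 2, size=(N_SYM, 2))
symbols = np.array([CONST[tuple(b)] for b in bits]) / np.sqrt(2)

# Rectangular pulse shaping + AWGN channel
tx = np.repeat(symbols, SPS)
rx = tx + NOISE_STD * (rng.standard_normal(tx.size) + 1j * rng.standard_normal(tx.size))

# Integrate-and-dump: average the SPS samples belonging to each symbol
dumped = rx.reshape(N_SYM, SPS).mean(axis=1)

# Hard decision: the signs of Q and I recover the Gray-mapped bit pair
rx_bits = np.column_stack(((dumped.imag < 0).astype(int), (dumped.real < 0).astype(int)))
print("bit error rate:", np.mean(rx_bits != bits))
```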

  2. Monolithic active pixel sensors (MAPS) in a VLSI CMOS technology

    CERN Document Server

    Turchetta, R; Manolopoulos, S; Tyndel, M; Allport, P P; Bates, R; O'Shea, V; Hall, G; Raymond, M

    2003-01-01

    Monolithic Active Pixel Sensors (MAPS) designed in a standard VLSI CMOS technology have recently been proposed as a compact pixel detector for the detection of high-energy charged particles in vertex/tracking applications. MAPS, also named CMOS sensors, are already extensively used in visible light applications. With respect to other competing imaging technologies, CMOS sensors have several potential advantages in terms of low cost, low power, lower noise at higher speed, random access of pixels which allows windowing of regions of interest, and the ability to integrate several functions on the same chip. Altogether this leads to the concept of a 'camera-on-a-chip'. In this paper, we review the use of CMOS sensors for particle physics and we analyse their performance in terms of efficiency (fill factor), signal generation, noise, readout speed and sensor area. In most high-energy physics applications, data reduction is needed in the sensor at an early stage of the data processing before transfer of the data to ta...

  3. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards such as those specified by the Consultative Committee for Space Data Systems (CCSDS) standards and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements, over the last five years, has resulted in significant solutions to these problems. This solution, referred to as the functional components approach includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASIC's) developed specifically to support NASA's telemetry data systems needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence, and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  4. PROJECT ENGINEERING DATA MANAGEMENT AT AUTOMATED PREPARATION OF DESIGN DOCUMENTATION

    Directory of Open Access Journals (Sweden)

    A. V. Guryanov

    2017-01-01

    Full Text Available We have developed and implemented software tools for the automated support of the end-to-end process of preparing design documentation for a product. The proposed solution is based on processing the engineering project data contained in interdependent design documents: the tactical and technical characteristics of products, data on the valuable metals contained in them, the list of components used in a product, and others. Processing of the engineering data is based on converting it into the form required by industry standards for the preparation of design documentation. The general graph of the design documentation developed for a product is provided, and the developed software product is described. The automated preparation of interdependent design documents is shown using the example of preparing a purchased-products list. The results of this work can be used in research and development activities on the creation of promising new models of ADP equipment.

  5. Home Automation : Smart home technology and template house design

    OpenAIRE

    Zheng, Zeya

    2013-01-01

    The first half of this thesis introduces the reader to general knowledge of home automation, its technology, and each of its components. The second half covers a home automation template design and a market competitiveness analysis. The author assumes that the reader is prepared to spend a considerable amount of money on a smart home and therefore introduces home automation component by component. So the reader in this thesis actu...

  6. An automated approach to the design of decision tree classifiers

    Science.gov (United States)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
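    As a toy illustration of designing a decision tree purely from a-priori class statistics and then computing the global probability of correct classification, the sketch below builds a two-node threshold tree for three one-dimensional Gaussian classes. The class names, statistics and equal-prior assumption are illustrative, and the paper's linear feature extraction step is omitted.

```python
# Design a threshold decision tree from a-priori class statistics alone and
# compute the global probability of correct classification analytically.
from math import erf, sqrt

def gauss_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Assumed a-priori class statistics: (mean, std) of one extracted feature
classes = {"water": (-2.0, 1.0), "soil": (0.5, 1.0), "vegetation": (3.0, 1.0)}

# With equal priors and equal variances the Bayes thresholds lie midway
# between adjacent class means; they define a two-node decision tree:
#   root:   x < t1 -> "water",  else go to node 2
#   node 2: x < t2 -> "soil",   else "vegetation"
t1 = (classes["water"][0] + classes["soil"][0]) / 2.0
t2 = (classes["soil"][0] + classes["vegetation"][0]) / 2.0

def classify(x):
    if x < t1:
        return "water"
    return "soil" if x < t2 else "vegetation"

# Per-class probability of correct classification from the Gaussian CDF
p_correct = {
    "water": gauss_cdf(t1, *classes["water"]),
    "soil": gauss_cdf(t2, *classes["soil"]) - gauss_cdf(t1, *classes["soil"]),
    "vegetation": 1.0 - gauss_cdf(t2, *classes["vegetation"]),
}
pc = sum(p_correct.values()) / len(p_correct)   # global Pc with equal priors
print(t1, t2, {k: round(v, 3) for k, v in p_correct.items()}, round(pc, 3))
```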

  7. VLSI Architecture and Design

    OpenAIRE

    Johnsson, Lennart

    1980-01-01

    Integrated circuit technology is rapidly approaching a state where feature sizes of one micron or less are tractable. Chip sizes are increasing slowly. These two developments result in considerably increased complexity in chip design. The physical characteristics of integrated circuit technology are also changing. The cost of communication will become dominant, making new architectures and algorithms both feasible and desirable. A large number of processors on a single chip will be possible....

  8. Design and control of automated guided vehicle systems: A case study

    NARCIS (Netherlands)

    Li, Q.; Adriaansen, A.C.; Udding, J.T.; Pogromski, A.Y.

    2011-01-01

    In this paper, we study the design and control of automated guided vehicle (AGV) systems, with the focus on the quayside container transport in an automated container terminal. We first set up an event-driven model for an AGV system in the zone control framework. Then a number of layouts of the road

  9. VLSI Technology for Cognitive Radio

    Science.gov (United States)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is achieving an efficient spectrum sensing scheme to overcome the spectrum scarcity problem. The most popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure, where the optimisation is done in terms of the area and power performance of the spectrum sensing hardware. The VLSI structure of the optimised flexible spectrum sensor is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
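    For readers unfamiliar with the underlying energy-detection idea, the sketch below shows the basic test in floating-point Python: average the squared magnitude of N received samples and compare it against a threshold set from the noise power. It is only a conceptual stand-in for the paper's optimised hardware filter structure; the CLT-based threshold rule and all parameters are assumptions.

```python
# Energy-detection spectrum sensing: declare a signal present when the
# average sample energy exceeds a noise-power-based threshold.
import numpy as np

Z_95 = 1.645  # standard-normal quantile for an (assumed) 5% false-alarm rate

def energy_detect(samples, noise_power):
    """Return True if a primary-user signal is declared present."""
    n = samples.size
    stat = np.mean(np.abs(samples) ** 2)
    # CLT approximation: under noise only, stat ~ N(noise_power, noise_power^2 / n)
    threshold = noise_power * (1.0 + Z_95 / np.sqrt(n))
    return stat > threshold

# Quick check: noise only vs. noise plus a weak complex sinusoid
rng = np.random.default_rng(1)
n = 4096
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)  # unit power
signal = 0.3 * np.exp(2j * np.pi * 0.01 * np.arange(n))
print(energy_detect(noise, 1.0), energy_detect(noise + signal, 1.0))
```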

  10. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.
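    A heavily simplified, one-spatial-dimension software sketch of the motion-energy principle is given below: a bank of velocity-tuned spatiotemporal quadrature filters is applied at a single probing location, and the candidate velocity with the largest energy is reported. The stimulus, filter parameters and window size are assumptions; the sketch does not model the paper's ternary edge extraction, confidence map or fixed-point VLSI pipeline.

```python
# 1-D motion-energy velocity estimation with velocity-tuned quadrature filters.
import numpy as np

T, X = 32, 64                          # frames, pixels
true_v = 2.0                           # bar velocity in pixels per frame
stim = np.zeros((T, X))
for t in range(T):                     # a bright bar drifting to the right
    stim[t, int(5 + true_v * t) % X] = 1.0

HALF = 6                               # half-size of the space-time window
t0, x0 = T // 2, int(5 + true_v * (T // 2))    # probing location on the bar's path
patch = stim[t0 - HALF:t0 + HALF + 1, x0 - HALF:x0 + HALF + 1]

def motion_energy(patch, v, f=0.125, sigma=3.0):
    """Energy of a quadrature pair of spatiotemporal filters tuned to velocity v."""
    ts, xs = np.meshgrid(np.arange(-HALF, HALF + 1),
                         np.arange(-HALF, HALF + 1), indexing="ij")
    env = np.exp(-(xs ** 2 + ts ** 2) / (2 * sigma ** 2))
    phase = 2 * np.pi * f * (xs - v * ts)
    return np.sum(patch * env * np.cos(phase)) ** 2 + np.sum(patch * env * np.sin(phase)) ** 2

candidates = np.arange(-4.0, 4.5, 0.5)
energies = [motion_energy(patch, v) for v in candidates]
print("estimated velocity:", candidates[int(np.argmax(energies))])   # expect ~2.0
```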

  11. VLSI Implementation of a Fixed-Complexity Soft-Output MIMO Detector for High-Speed Wireless

    Directory of Open Access Journals (Sweden)

    Di Wu

    2010-01-01

    Full Text Available This paper presents a low-complexity MIMO symbol detector with close-to-maximum a posteriori performance for the emerging multi-antenna enhanced high-speed wireless communications. The VLSI implementation is based on a novel MIMO detection algorithm called Modified Fixed-Complexity Soft-Output (MFCSO) detection, which achieves a good trade-off between performance and implementation cost compared to the referenced prior art. By including a microcode-controlled channel preprocessing unit and a pipelined detection unit, it is flexible enough to cover several different standards and transmission schemes. This flexibility allows adaptive detection to minimize power consumption without degradation in throughput. The VLSI implementation of the detector shows that real-time MIMO symbol detection of the 20 MHz bandwidth 3GPP LTE and the 10 MHz WiMAX downlink physical channels is achievable at reasonable silicon cost.

  12. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.
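    The breadth-first K-best search itself is easy to state in software, even though the paper's contribution is the hardware expansion and sorting scheme. The sketch below shows a plain floating-point K-best detector after QR decomposition of the channel; the constellation, antenna count and K are illustrative assumptions, and the paper's partial-branch expansion and distributed sorter are not modelled.

```python
# Breadth-first K-best MIMO detection: detect symbols layer by layer on the
# upper-triangular system obtained from a QR decomposition, keeping only the
# K best partial paths ranked by accumulated Euclidean distance.
import numpy as np

rng = np.random.default_rng(2)
NT = 4                                   # transmit antennas (tree depth)
K = 8                                    # surviving paths per layer
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def k_best_detect(y, H, constellation, k):
    q, r = np.linalg.qr(H)
    z = q.conj().T @ y
    n = H.shape[1]
    paths = [((), 0.0)]                  # (symbols for layers layer..n-1, metric)
    for layer in range(n - 1, -1, -1):   # detect the bottom row of R first
        expanded = []
        for syms, metric in paths:
            for s in constellation:
                interference = sum(r[layer, layer + 1 + i] * syms[i]
                                   for i in range(len(syms)))
                d = abs(z[layer] - r[layer, layer] * s - interference) ** 2
                expanded.append(((s,) + syms, metric + d))
        paths = sorted(expanded, key=lambda p: p[1])[:k]
    return np.array(paths[0][0])         # best full-length path

# Quick check against a low-noise transmission (should recover the sent symbols)
H = (rng.standard_normal((NT, NT)) + 1j * rng.standard_normal((NT, NT))) / np.sqrt(2)
tx = rng.choice(QPSK, NT)
y = H @ tx + 0.05 * (rng.standard_normal(NT) + 1j * rng.standard_normal(NT))
print(np.allclose(k_best_detect(y, H, QPSK, K), tx))
```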

  13. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

    The K-best detector is considered as a promising technique in the MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on the novel expansion scheme, which can predetermine the branches' ascending order by their local distances. Then a distributed sorter sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture can approximately reduce fundamental operations by 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and, consequently, lower the demand on the hardware resource significantly. Simulation results prove that the proposed architecture can achieve a performance very similar to conventional K-best detectors. Hence, it is an efficient solution to the K-best detector's VLSI implementation for high-throughput MIMO-OFDM systems.

  14. GENIUS : An integrated environment for supporting the design of generic automated negotiators

    NARCIS (Netherlands)

    Lin, R.; Kraus, S.; Baarslag, T.; Tykhonov, D.; Hindriks, K.; Jonker, C.M.

    2012-01-01

    The design of automated negotiators has been the focus of abundant research in recent years. However, due to difficulties involved in creating generalized agents that can negotiate in several domains and against human counterparts, many automated negotiators are domain specific and their behavior

  15. Analog VLSI Models of Range-Tuned Neurons in the Bat Echolocation System

    Directory of Open Access Journals (Sweden)

    Horiuchi Timothy

    2003-01-01

    Full Text Available Bat echolocation is a fascinating topic of research for both neuroscientists and engineers, due to the complex and extremely time-constrained nature of the problem and its potential for application to engineered systems. In the bat's brainstem and midbrain exist neural circuits that are sensitive to the specific difference in time between the outgoing sonar vocalization and the returning echo. While some of the details of the neural mechanisms are known to be species-specific, a basic model of reafference-triggered, postinhibitory rebound timing is reasonably well supported by available data. We have designed low-power, analog VLSI circuits to mimic this mechanism and have demonstrated range-dependent outputs for use in a real-time sonar system. These circuits are being used to implement range-dependent vocalization amplitude, vocalization rate, and closest target isolation.

  16. Automating risk analysis of software design models.

    Science.gov (United States)

    Frydman, Maxime; Ruiz, Guifré; Heymann, Elisa; César, Eduardo; Miller, Barton P

    2014-01-01

    The growth of the internet and networked systems has exposed software to an increased amount of security threats. One of the responses from software developers to these threats is the introduction of security activities in the software development lifecycle. This paper describes an approach to reduce the need for costly human expertise to perform risk analysis in software, which is common in secure development methodologies, by automating threat modeling. Reducing the dependency on security experts aims at reducing the cost of secure development by allowing non-security-aware developers to apply secure development with little to no additional cost, making secure development more accessible. To automate threat modeling two data structures are introduced, identification trees and mitigation trees, to identify threats in software designs and advise mitigation techniques, while taking into account specification requirements and cost concerns. These are the components of our model for automated threat modeling, AutSEC. We validated AutSEC by implementing it in a tool based on data flow diagrams, from the Microsoft security development methodology, and applying it to VOMS, a grid middleware component, to evaluate our model's performance.

  17. The Automation Control System Design of Walking Beam Heating Furnace

    OpenAIRE

    Hong-Yu LIU; Jun-Qing LIU; Jun-Jie XI

    2014-01-01

    In connection with the transformation project of a certain strip steel rolling production line, this paper describes the process of a walking beam heating furnace and the practical application of the LOS-T18-2ZC1 laser detector. The network communication model of the walking beam heating furnace control system was designed, and the realization of production process automation control is described. The entire automation control system allocation picture and PLC power distributio...

  18. Fuel lattice design in a boiling water reactor using a knowledge-based automation system

    International Nuclear Information System (INIS)

    Tung, Wu-Hsiung; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2015-01-01

    Highlights: • An automation system was developed for the fuel lattice radial design of BWRs. • An enrichment group peaking equalizing method is applied to optimize the design. • Several heuristic rules and restrictions are incorporated to facilitate the design. • The CPU time for the system to design a 10x10 lattice was less than 1.2 h. • The beginning-of-life LPF was improved from 1.319 to 1.272 for one of the cases. - Abstract: A knowledge-based fuel lattice design automation system for BWRs is developed and applied to the design of 10 × 10 fuel lattices. The knowledge implemented in this fuel lattice design automation system includes the determination of gadolinium fuel pin location, the determination of fuel pin enrichment and enrichment distribution. The optimization process starts by determining the gadolinium distribution based on the pin power distribution of a flat enrichment lattice and some heuristic rules. Next, a pin power distribution flattening and an enrichment grouping process are introduced to determine the enrichment of each fuel pin enrichment type and the initial enrichment distribution of a fuel lattice design. Finally, enrichment group peaking equalizing processes are performed to achieve lower lattice peaking. Several fuel lattice design constraints are also incorporated in the automation system such that the system can accomplish a design which meets the requirements of practical use. Depending on the axial position of the lattice, a different method is applied in the design of the fuel lattice. Two typical fuel lattices with U-235 enrichment of 4.471% and 4.386% were taken as references. Application of the method demonstrates that improved lattice designs can be achieved through the enrichment grouping and the enrichment group peaking equalizing method. It takes about 11 min and 1 h 11 min of CPU time for the automation system to accomplish two design cases on an HP-8000 workstation, including the execution of CASMO-4 lattice

  19. Fuel lattice design in a boiling water reactor using a knowledge-based automation system

    Energy Technology Data Exchange (ETDEWEB)

    Tung, Wu-Hsiung, E-mail: wstong@iner.gov.tw; Lee, Tien-Tso; Kuo, Weng-Sheng; Yaur, Shung-Jung

    2015-11-15

    Highlights: • An automation system was developed for the fuel lattice radial design of BWRs. • An enrichment group peaking equalizing method is applied to optimize the design. • Several heuristic rules and restrictions are incorporated to facilitate the design. • The CPU time for the system to design a 10x10 lattice was less than 1.2 h. • The beginning-of-life LPF was improved from 1.319 to 1.272 for one of the cases. - Abstract: A knowledge-based fuel lattice design automation system for BWRs is developed and applied to the design of 10 × 10 fuel lattices. The knowledge implemented in this fuel lattice design automation system includes the determination of gadolinium fuel pin location, the determination of fuel pin enrichment and enrichment distribution. The optimization process starts by determining the gadolinium distribution based on the pin power distribution of a flat enrichment lattice and some heuristic rules. Next, a pin power distribution flattening and an enrichment grouping process are introduced to determine the enrichment of each fuel pin enrichment type and the initial enrichment distribution of a fuel lattice design. Finally, enrichment group peaking equalizing processes are performed to achieve lower lattice peaking. Several fuel lattice design constraints are also incorporated in the automation system such that the system can accomplish a design which meets the requirements of practical use. Depending on the axial position of the lattice, a different method is applied in the design of the fuel lattice. Two typical fuel lattices with U-235 enrichment of 4.471% and 4.386% were taken as references. Application of the method demonstrates that improved lattice designs can be achieved through the enrichment grouping and the enrichment group peaking equalizing method. It takes about 11 min and 1 h 11 min of CPU time for the automation system to accomplish two design cases on an HP-8000 workstation, including the execution of CASMO-4

  20. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange algorithm (REA) and the trace-back algorithm (TBA), are considered. The REA requires the smallest number of storage elements...
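    To make the trace-back idea concrete, the sketch below decodes the classic rate-1/2, constraint-length-3 (7,5) convolutional code: one predecessor decision bit is stored per state and trellis step, and the survivor path is recovered by tracing back from the best final state. The code, message and error pattern are illustrative assumptions, not the decoder studied in the paper.

```python
# Hard-decision Viterbi decoding with trace-back path memory for the (7,5) code.
G = (0b111, 0b101)          # generator polynomials (7, 5)
N_STATES = 4                # constraint length 3 -> 2 memory bits

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & G[0]), parity(reg & G[1])]
        state = reg >> 1                      # new state = (b, previous MSB)
    return out

def viterbi_decode(rx):
    n = len(rx) // 2
    metrics = [0] + [10**9] * (N_STATES - 1)  # encoder starts in state 0
    decisions = []                            # per step: predecessor bit per state
    for t in range(n):
        r0, r1 = rx[2 * t], rx[2 * t + 1]
        new_metrics = [10**9] * N_STATES
        step = [0] * N_STATES
        for s in range(N_STATES):
            b, p1 = s >> 1, s & 1             # input bit and shared predecessor bit
            for p0 in (0, 1):                 # two possible predecessor states
                p = (p1 << 1) | p0
                reg = (b << 2) | p
                branch = (parity(reg & G[0]) != r0) + (parity(reg & G[1]) != r1)
                if metrics[p] + branch < new_metrics[s]:
                    new_metrics[s], step[s] = metrics[p] + branch, p0
        decisions.append(step)
        metrics = new_metrics
    # Trace back from the best final state, emitting the input bit (state MSB)
    state = metrics.index(min(metrics))
    decoded = []
    for step in reversed(decisions):
        decoded.append(state >> 1)
        state = ((state & 1) << 1) | step[state]   # reconstruct the predecessor state
    return decoded[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]    # includes two tail zeros
coded = encode(msg)
coded[5] ^= 1                                 # one channel error (free distance 5 corrects it)
print(viterbi_decode(coded) == msg)
```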

  1. First results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Anzivino, G.; Horisberger, R.; Hubbeling, L.; Hyams, B.; Parker, S.; Breakstone, A.; Litke, A.M.; Walker, J.T.; Bingefors, N.

    1986-01-01

    A 256-strip silicon detector with 25 μm strip pitch, connected to two 128-channel NMOS VLSI chips (Microplex), has been tested using straight-through tracks from a ruthenium beta source. The readout channels have a pitch of 47.5 μm. A single multiplexed output provides voltages proportional to the integrated charge from each strip. The most probable signal height from the beta traversals is approximately 14 times the rms noise in any single channel. (orig.)

  2. Some Challenges in the Design of Human-Automation Interaction for Safety-Critical Systems

    Science.gov (United States)

    Feary, Michael S.; Roth, Emilie

    2014-01-01

    Increasing amounts of automation are being introduced to safety-critical domains. While the introduction of automation has led to an overall increase in reliability and improved safety, it has also introduced a class of failure modes, and new challenges in risk assessment for the new systems, particularly in the assessment of rare events resulting from complex inter-related factors. Designing successful human-automation systems is challenging, and the challenges go beyond good interface development (e.g., Roth, Malin, & Schreckenghost 1997; Christoffersen & Woods, 2002). Human-automation design is particularly challenging when the underlying automation technology generates behavior that is difficult for the user to anticipate or understand. These challenges have been recognized in several safety-critical domains, and have resulted in increased efforts to develop training, procedures, regulations and guidance material (CAST, 2008, IAEA, 2001, FAA, 2013, ICAO, 2012). This paper points to the continuing need for new methods to describe and characterize the operational environment within which new automation concepts are being presented. We will describe challenges to the successful development and evaluation of human-automation systems in safety-critical domains, and describe some approaches that could be used to address these challenges. We will draw from experience with the aviation, spaceflight and nuclear power domains.

  3. Designing automation for complex work environments under different levels of stress.

    Science.gov (United States)

    Sauer, Juergen; Nickel, Peter; Wastell, David

    2013-01-01

    This article examines the effectiveness of different forms of static and adaptable automation under low- and high-stress conditions. Forty participants were randomly assigned to one of four experimental conditions, comparing three levels of static automation (low, medium and high) and one level of adaptable automation, with the environmental stressor (noise) being varied as a within-subjects variable. Participants were trained for 4 h on a simulation of a process control environment, called AutoCAMS, followed by a 2.5-h testing session. Measures of performance, psychophysiology and subjective reactions were taken. The results showed that operators preferred higher levels of automation under noise than under quiet conditions. A number of parameters indicated negative effects of noise exposure, such as performance impairments, physiological stress reactions and higher mental workload. It also emerged that adaptable automation provided advantages over low and intermediate static automation, with regard to mental workload, effort expenditure and diagnostic performance. The article concludes that for the design of automation a wider range of operational scenarios reflecting adverse as well as ideal working conditions needs to be considered. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  4. An automated methodology development. [software design for combat simulation

    Science.gov (United States)

    Hawley, L. R.

    1985-01-01

    The design methodology employed in testing the applicability of Ada in large-scale combat simulations is described. Ada was considered as a substitute for FORTRAN to lower life cycle costs and ease the program development efforts. An object-oriented approach was taken, which featured definitions of military targets, the capability of manipulating their condition in real-time, and one-to-one correlation between the object states and real world states. The simulation design process was automated by the problem statement language (PSL)/problem statement analyzer (PSA). The PSL/PSA system accessed the problem data base directly to enhance the code efficiency by, e.g., eliminating non-used subroutines, and provided for automated report generation, besides allowing for functional and interface descriptions. The ways in which the methodology satisfied the responsiveness, reliability, transportability, modifiability, timeliness and efficiency goals are discussed.

  5. [Research and Design of a System for Detecting Automated External Defbrillator Performance Parameters].

    Science.gov (United States)

    Wang, Kewu; Xiao, Shengxiang; Jiang, Lina; Hu, Jingkai

    2017-09-30

    In order to regularly check the performance parameters of automated external defibrillators (AEDs) and to ensure an instrument is safe before use, a system for detecting AED performance parameters was researched and designed. Based on a study of the characteristics of these performance parameters, and combining the stability and high speed of the STM32 with PWM modulation control, the system produces a variety of normal and abnormal ECG signals through digital sampling methods. The hardware and software designs were completed and a prototype was built. The system can accurately detect AED discharge energy, synchronous defibrillation time, charging time and other key performance parameters.

  6. Application of a path sensitizing method on automated generation of test specifications for control software

    International Nuclear Information System (INIS)

    Morimoto, Yuuichi; Fukuda, Mitsuko

    1995-01-01

    An automated generation method for test specifications has been developed for sequential control software in plant control equipment. Sequential control software can be represented as sequential circuits, and the control software implemented in control equipment is designed from these circuit diagrams. In logic tests of VLSIs, path sensitizing methods are widely used to generate test specifications, but such methods generate test specifications for a single point in time only and cannot be directly applied to sequential control software. The basic idea of the proposed method is as follows. Specifications of each logic operator in the diagrams are defined in the software design process. Therefore, test specifications of each operator in the control software can be determined from these specifications, and the validity of the software can be judged by inspecting all of the operators in the logic circuit diagrams. Candidates for sensitized paths, on which test data for each operator propagate, can be generated by the path sensitizing method. To confirm the feasibility of the method, it was experimentally applied to control software in digital control equipment. The program could generate test specifications exactly, and the feasibility of the method was confirmed. (orig.) (3 refs., 7 figs.)
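    As a toy illustration of the sensitisation idea the method borrows from VLSI testing, the brute-force sketch below finds, for a small combinational function, the input vectors under which toggling a chosen input propagates to the output (and would therefore detect a stuck-at fault on that line). The example circuit is an arbitrary assumption, and the exhaustive search stands in for a real path-sensitising algorithm such as the D-algorithm.

```python
# Brute-force search for input vectors that sensitise a path from a chosen
# input to the output of a small combinational circuit.
from itertools import product

def circuit(a, b, c, d):
    """y = (a AND b) OR (NOT c AND d) -- a stand-in for one control-logic rung."""
    return (a & b) | ((1 - c) & d)

INPUTS = ("a", "b", "c", "d")

def sensitising_vectors(target):
    """All input vectors under which a change on `target` reaches the output."""
    idx = INPUTS.index(target)
    found = []
    for vec in product((0, 1), repeat=len(INPUTS)):
        flipped = list(vec)
        flipped[idx] ^= 1
        if circuit(*vec) != circuit(*flipped):
            found.append(dict(zip(INPUTS, vec)))
    return found

for line in INPUTS:
    vecs = sensitising_vectors(line)
    print(f"input {line}: {len(vecs)} sensitising vectors, e.g. {vecs[0]}")
```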

  7. On the engineering design for systematic integration of agent-orientation in industrial automation.

    Science.gov (United States)

    Yu, Liyong; Schüller, Andreas; Epple, Ulrich

    2014-09-01

    In today's automation industry, agent-oriented development of system functionalities appears to have a great potential for increasing autonomy and flexibility of complex operations, while lowering the workload of users. In this paper, we present a reference model for the harmonious and systematical integration of agent-orientation in industrial automation. Considering compatibility with existing automation systems and best practice, this model combines advantages of function block technology, service orientation and native description methods from the automation standard IEC 61131-3. This approach can be applied as a guideline for the engineering design of future agent-oriented automation systems. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  8. A novel low-voltage low-power analogue VLSI implementation of neural networks with on-chip back-propagation learning

    Science.gov (United States)

    Carrasco, Manuel; Garde, Andres; Murillo, Pilar; Serrano, Luis

    2005-06-01

    In this paper a novel design and implementation of a VLSI analogue neural net based on a Multi-Layer Perceptron (MLP) with an on-chip Back Propagation (BP) learning algorithm, suitable for the resolution of classification problems, is described. In order to implement a general and programmable analogue architecture, the design has been carried out in a hierarchical way: the net has been divided into synapse blocks and neuron blocks, providing an easy method for analysis. These blocks basically consist of simple cells, which are mainly the activation functions (NAF), derivatives (DNAF), multipliers and weight update circuits. The analogue design is based on current-mode translinear techniques using MOS transistors working in the weak inversion region in order to reduce both the voltage supply and the power consumption. Moreover, with the purpose of minimizing the noise, offset and even-order distortion, the topologies are fully differential and balanced. The circuit, named ANNE (Analogue Neural NEt), has been prototyped and characterized as a proof of concept in CMOS AMI-0.5A technology, occupying a total area of 2.7 mm2. The chip includes two versions of neural nets with the on-chip BP learning algorithm, which are respectively 2-1 and 2-2-1 implementations. The proposed nets have been experimentally tested using supply voltages from 2.5 V to 1.8 V, which is suitable for single-cell lithium-ion battery supply applications. Experimental results of both implementations included in ANNE exhibit a good performance in solving classification problems. These results have been compared with other analogue VLSI implementations of neural nets published in the literature, demonstrating that our proposal is very efficient in terms of occupied area and power consumption.
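    The mathematical core of the chip's on-chip learning, a 2-2-1 perceptron trained with back-propagation, can be sketched in a few lines of floating-point Python (here on the XOR problem). This is only the software counterpart of the learning rule; it does not model the translinear, weak-inversion circuits, and the learning rate, epoch count and initialisation are assumptions.

```python
# A 2-2-1 sigmoid MLP trained with batch back-propagation on XOR.
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 0.5, (2, 2)), np.zeros(2)   # input -> hidden (2-2)
W2, b2 = rng.normal(0, 0.5, (2, 1)), np.zeros(1)   # hidden -> output (2-1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the mean-squared error
    delta_out = (y - Y) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

# Outputs should approach 0, 1, 1, 0; a 2-2-1 net can occasionally settle in a
# local minimum, in which case re-run with a different seed.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```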

  9. Crew aiding and automation: A system concept for terminal area operations, and guidelines for automation design

    Science.gov (United States)

    Dwyer, John P.

    1994-01-01

    This research and development program comprised two efforts: the development of guidelines for the design of automated systems, with particular emphasis on automation design that takes advantage of contextual information, and the concept-level design of a crew aiding system, the Terminal Area Navigation Decision Aiding Mediator (TANDAM). This concept outlines a system capable of organizing navigation and communication information and assisting the crew in executing the operations required in descent and approach. In service of this endeavor, problem definition activities were conducted that identified terminal area navigation and operational familiarization exercises addressing the terminal area navigation problem. Both airborne and ground-based (ATC) elements of aircraft control were extensively researched. The TANDAM system concept was then specified, and the crew interface and associated systems described. Additionally, three descent and approach scenarios were devised in order to illustrate the principal functions of the TANDAM system concept in relation to the crew, the aircraft, and ATC. A plan for the evaluation of the TANDAM system was established. The guidelines were developed based on reviews of relevant literature, and on experience gained in the design effort.

  10. Design Automation Using Script Languages. High-Level CAD Templates in Non-Parametric Programs

    Science.gov (United States)

    Moreno, R.; Bazán, A. M.

    2017-10-01

    The main purpose of this work is to study the advantages offered by applying traditional technical drawing techniques to processes for design automation with non-parametric CAD programs equipped with scripting languages. Given that an example drawing can be solved with traditional step-by-step detailed procedures, it is possible to do the same with CAD applications and to generalize it later, incorporating references. In today's modern CAD applications, there are striking absences of solutions for building engineering: oblique projections (military and cavalier), 3D modelling of complex stairs, roofs, furniture, and so on. The use of geometric references (using variables in script languages) and their incorporation into high-level CAD templates allows the automation of processes. Instead of repeatedly creating similar designs or modifying their data, users should be able to use these templates to generate future variations of the same design. This paper presents the automation of several complex drawing examples based on CAD script files aided by parametric geometry calculation tools. The proposed method allows us to solve complex geometry designs not currently covered by CAD applications and to subsequently create new derivatives without user intervention. Automation in the generation of complex designs not only saves time but also increases the quality of the presentations and reduces the possibility of human errors.
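    A generic illustration of the high-level template idea, independent of any particular CAD package's scripting API, is sketched below: a parametric "stair" template with geometric references expands automatically into low-level box primitives, so future variations of the same design only require new parameter values. The template name, parameters and output format are assumptions.

```python
# A high-level parametric template expanded into low-level drawing primitives.
from dataclasses import dataclass

@dataclass
class StairTemplate:
    steps: int          # number of steps
    rise: float         # height of each step (m)
    run: float          # depth of each step (m)
    width: float        # stair width (m)

    def expand(self):
        """Return one (x, y, z, dx, dy, dz) box per step."""
        boxes = []
        for i in range(self.steps):
            boxes.append((i * self.run, 0.0, 0.0,            # box origin
                          self.run, self.width, (i + 1) * self.rise))
        return boxes

# Future variations of the same design need only new parameter values.
for box in StairTemplate(steps=4, rise=0.175, run=0.28, width=1.2).expand():
    print("BOX origin=({:.3f}, {:.3f}, {:.3f}) size=({:.3f}, {:.3f}, {:.3f})".format(*box))
```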

  11. Automated reasoning applications to design validation and sneak function analysis

    International Nuclear Information System (INIS)

    Stratton, R.C.

    1984-01-01

    Argonne National Laboratory (ANL) is actively involved in the LMFBR Man-Machine Integration (MMI) Safety Program. The objective of this program is to enhance the operational safety and reliability of fast-breeder reactors by optimum integration of men and machines through the application of human factors principles and control engineering to the design, operation, and the control environment. ANL is developing methods to apply automated reasoning and computerization in the validation and sneak function analysis process. This project provides the element definitions and relations necessary for an automated reasoner (AR) to reason about design validation and sneak function analysis. This project also provides a demonstration of this AR application on an Experimental Breeder Reactor-II (EBR-II) system, the Argonne Cooling System

  12. Automated Design of Propellant-Optimal, End-to-End, Low-Thrust Trajectories for Trojan Asteroid Tours

    Science.gov (United States)

    Stuart, Jeffrey; Howell, Kathleen; Wilson, Roby

    2013-01-01

    The Sun-Jupiter Trojan asteroids are celestial bodies of great scientific interest as well as potential resources offering water and other mineral resources for longterm human exploration of the solar system. Previous investigations under this project have addressed the automated design of tours within the asteroid swarm. This investigation expands the current automation scheme by incorporating options for a complete trajectory design approach to the Trojan asteroids. Computational aspects of the design procedure are automated such that end-to-end trajectories are generated with a minimum of human interaction after key elements and constraints associated with a proposed mission concept are specified.

  13. Automation of fusion first wall design using artificial intelligence technique

    International Nuclear Information System (INIS)

    Yoshimura, Shinobu; Yagawa, Genki; Mochizuki, Yoshihiko

    1990-01-01

    This paper describes the application of artificial intelligence techniques to the design automation of the fusion first wall, which must operate in a complex environment where huge electromagnetic and thermal loading as well as heavy neutron irradiation occur. As a basic strategy for designing the structure shape while considering many coupled phenomena, an ordinary design procedure based on the generate-and-test strategy is adopted because of its simplicity and broad applicability. To automate the design procedure while maintaining its flexibility, extensibility and efficiency, artificial intelligence techniques are utilized as follows. An object-oriented knowledge representation technique is adopted to store knowledge modules, that is, objects related to the first wall design, while a data-flow processing technique is utilized as an inference mechanism among the knowledge modules. These techniques realize the flexibility and extensibility of the system. Moreover, as an efficient design modification mechanism, which is essential in a design process, an empirical approach based on experts' empirical knowledge and a mathematical approach based on a kind of numerical sensitivity analysis are introduced. The developed system is applied to a simple example, the design of a two-dimensional model of the first wall with a cooling channel, and its fundamental performance is clearly demonstrated. (author)

  14. A Review Of Design And Control Of Automated Guided Vehicle Systems

    NARCIS (Netherlands)

    T. Le-Anh (Tuan); M.B.M. de Koster (René)

    2004-01-01

    This paper presents a review on design and control of automated guided vehicle systems. We address most key related issues including guide-path design, estimating the number of vehicles, vehicle scheduling, idle-vehicle positioning, battery management, vehicle routing, and conflict

  15. Geometry Based Design Automation : Applied to Aircraft Modelling and Optimization

    OpenAIRE

    Amadori, Kristian

    2012-01-01

    Product development processes are continuously challenged by demands for increased efficiency. As engineering products become more and more complex, efficient tools and methods for integrated and automated design are needed throughout the development process. Multidisciplinary Design Optimization (MDO) is one promising technique that has the potential to drastically improve concurrent design. MDO frameworks combine several disciplinary models with the aim of gaining a holistic perspective of ...

  16. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  17. Analog integrated circuit design automation placement, routing and parasitic extraction techniques

    CERN Document Server

    Martins, Ricardo; Horta, Nuno

    2017-01-01

    This book introduces readers to a variety of tools for analog layout design automation. After discussing the placement and routing problem in electronic design automation (EDA), the authors overview a variety of automatic layout generation tools, as well as the most recent advances in analog layout-aware circuit sizing. The discussion includes different methods for automatic placement (a template-based Placer and an optimization-based Placer), a fully-automatic Router and an empirically based Parasitic Extractor. The concepts and algorithms of all the modules are thoroughly described, enabling readers to reproduce the methodologies, improve the quality of their designs, or use them as a starting point for a new tool. All the methods described are applied to practical examples for a 130 nm design process, as well as placement and routing benchmark sets. Introduces readers to hierarchical combination of Pareto fronts of placements; Presents electromigration-aware routing with multilayer multiport terminal structures...
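    In the spirit of the optimisation-based placer the book describes, the toy loop below places cells on a small grid and keeps any pairwise swap that reduces the total half-perimeter wirelength (a simulated-annealing placer reduced to greedy acceptance for brevity). The netlist, grid and move budget are illustrative assumptions, not the book's algorithms.

```python
# Toy optimisation-based placement: greedy cell swaps that reduce total HPWL.
import random

random.seed(0)
CELLS = list("ABCDEFGH")
NETS = [("A", "B", "C"), ("C", "D"), ("E", "F", "G"), ("A", "H"), ("B", "G")]
GRID = [(x, y) for x in range(4) for y in range(2)]          # 4x2 placement sites

placement = dict(zip(CELLS, GRID))                            # initial placement

def hpwl(pl):
    """Total half-perimeter wirelength of the netlist under placement pl."""
    total = 0
    for net in NETS:
        xs = [pl[c][0] for c in net]
        ys = [pl[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

best = hpwl(placement)
for _ in range(2000):
    a, b = random.sample(CELLS, 2)
    placement[a], placement[b] = placement[b], placement[a]   # propose a swap
    cost = hpwl(placement)
    if cost <= best:
        best = cost                                           # keep the improvement
    else:
        placement[a], placement[b] = placement[b], placement[a]  # undo
print("final HPWL:", best, placement)
```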

  18. A utilization of fuzzy control for design automation of nuclear structures

    International Nuclear Information System (INIS)

    Yoshimura, Shinobu; Yagawa, Genki; Mochizuki, Yoshihiko

    1991-01-01

    This paper describes an automated design of nuclear structures by means of some artificial intelligence techniques. The 'generate and test' strategy is adopted as a basic strategy of design. An empirical approach with the fuzzy control is introduced for efficient design modification. This system is applied to the design of some 2D models of the fusion first wall. (author)
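    A minimal sketch of what a fuzzy design-modification rule might look like is given below: the ratio of computed to allowable stress is fuzzified with triangular membership functions, three rules propose thickness corrections, and a weighted-average defuzzification yields the modification factor. The membership breakpoints and rule outputs are invented for illustration and are not the authors' rule base.

```python
# A one-input, one-output fuzzy rule base for thickness modification.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def thickness_correction(stress_ratio):
    # Fuzzify: how "low", "adequate" or "high" is the stress ratio?
    mu = {
        "low":      tri(stress_ratio, 0.0, 0.5, 0.9),
        "adequate": tri(stress_ratio, 0.7, 1.0, 1.3),
        "high":     tri(stress_ratio, 1.1, 1.5, 2.0),
    }
    # Rule consequents: multiplicative change of the wall thickness
    action = {"low": 0.9, "adequate": 1.0, "high": 1.2}
    num = sum(mu[k] * action[k] for k in mu)
    den = sum(mu.values())
    return num / den if den > 0 else 1.0   # weighted-average defuzzification

for r in (0.6, 1.0, 1.4):
    print(f"stress ratio {r:.1f} -> thickness factor {thickness_correction(r):.3f}")
```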

  19. Application of the H-Mode, a Design and Interaction Concept for Highly Automated Vehicles, to Aircraft

    Science.gov (United States)

    Goodrich, Kenneth H.; Flemisch, Frank O.; Schutte, Paul C.; Williams, Ralph A.

    2006-01-01

    Driven by increased safety, efficiency, and airspace capacity, automation is playing an increasing role in aircraft operations. As aircraft become increasingly able to autonomously respond to a range of situations with performance surpassing human operators, we are compelled to look for new methods that help us understand their use and guide their design using new forms of automation and interaction. We propose a novel design metaphor to aid the conceptualization, design, and operation of highly-automated aircraft. Design metaphors transfer meaning from common experiences to less familiar applications or functions. A notable example is the "Desktop metaphor" for manipulating files on a computer. This paper describes a metaphor for highly automated vehicles known as the H-metaphor and a specific embodiment of the metaphor known as the H-mode as applied to aircraft. The fundamentals of the H-metaphor are reviewed followed by an overview of an exploratory usability study investigating human-automation interaction issues for a simple H-mode implementation. The envisioned application of the H-mode concept to aircraft is then described as are two planned evaluations.

  20. Designing an automated blood fractionation system.

    Science.gov (United States)

    McQuillan, Adrian C; Sales, Sean D

    2008-04-01

    UK Biobank will be collecting blood samples from a cohort of 500 000 volunteers, and it is expected that the rate of collection will peak at approximately 3000 blood collection tubes per day. These samples need to be prepared for long-term storage. It is not considered practical to process this quantity of samples manually, so an automated blood fractionation system is required. Principles of industrial automation were applied to the blood fractionation process, leading to the requirement to develop a vision system that identifies the blood fractions within the blood collection tube so that the fractions can be accurately aspirated and dispensed into micro-tubes. A prototype was manufactured and tested on a range of human blood samples collected in different tube types. The specially designed vision system was capable of accurately measuring the position of the plasma meniscus, the plasma/buffy coat interface and the red cells/buffy coat interface within a vacutainer. A rack of 24 vacutainers could be processed in ... The blood fractionation system offers a solution to the problem of processing human blood samples collected in vacutainers in a consistent manner and provides a means of ensuring data and sample integrity.
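    As a simplified illustration of the vision task, the sketch below locates the meniscus and the two fraction interfaces in a synthetic one-dimensional intensity profile of a centrifuged tube by picking the three strongest intensity transitions. The profile, intensity levels and the gradient heuristic are assumptions, not the actual UK Biobank system design.

```python
# Locate fraction boundaries in a tube from a vertical intensity profile.
import numpy as np

# Synthetic profile, top (index 0) to bottom: air | plasma | buffy coat | red cells
levels = [(40, 230), (90, 180), (12, 120), (158, 40)]      # (length px, mean intensity)
rng = np.random.default_rng(4)
profile = np.concatenate([np.full(n, v, dtype=float) for n, v in levels])
profile += rng.normal(0, 3, profile.size)                  # sensor noise

# The three strongest absolute gradients mark the meniscus and the two interfaces
grad = np.abs(np.diff(profile))
meniscus, plasma_buffy, buffy_red = np.sort(np.argsort(grad)[-3:])
print(f"meniscus ~px {meniscus}, plasma/buffy ~px {plasma_buffy}, buffy/red ~px {buffy_red}")
```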

  1. Automated design of genomic Southern blot probes

    Directory of Open Access Journals (Sweden)

    Komiyama Noboru H

    2010-01-01

    experimentally validate a number of these automated designs by Southern blotting. The majority of probes we tested performed well confirming our in silico prediction methodology and the general usefulness of the software for automated genomic Southern probe design. Conclusions Software and supplementary information are freely available at: http://www.genes2cognition.org/software/southern_blot

  2. The Automation Control System Design of Walking Beam Heating Furnace

    Directory of Open Access Journals (Sweden)

    Hong-Yu LIU

    2014-10-01

    Full Text Available In the context of the transformation project of a strip steel rolling production line, this paper describes the process technology of a walking beam heating furnace and the practical application of the LOS-T18-2ZC1 laser detector. The network communication model of the walking beam heating furnace control system was designed and the realization of automated control of the production process is explained. The overall automation control system layout and the PLC power distribution system of the walking beam heating furnace were designed. The movement processes of the charge machine, the walking beam and the extractor are described, together with the hydraulic station of the walking mechanism and the corresponding control circuit diagrams. The control functions of the parallel-shift motor and the lifting/lowering motor are explained, and the control circuit diagrams of these motors for the charge machine and the extractor of the first heating furnace were designed. The realization of the steel blank length measurement function and of the tracking and sequence control of the heating furnace field rollers is also described. The design provides an important reference for raising the control level of walking beam heating furnaces.

  3. Three-dimensional design methodologies for tree-based FPGA architecture

    CERN Document Server

    Pangracious, Vinod; Mehrez, Habib

    2015-01-01

    This book focuses on the development of 3D design and implementation methodologies for Tree-based FPGA architecture. It also stresses the need for new and augmented 3D CAD tools to support design for 3D, in order to manufacture high-performance 3D integrated circuits and reconfigurable FPGA-based systems. The book was written as a text covering the foundations of 3D integrated system design and FPGA architecture design, intended for use in an elective or core graduate-level course in Electrical Engineering, Computer Engineering and doctoral research programs. No previous background in 3D integration is required, but a fundamental understanding of 2D CMOS VLSI design is assumed: the reader is expected to have taken the core curriculum in Electrical Engineering or Computer Engineering, with courses such as CMOS VLSI design, Digital System Design and Microelectronic Circuits being the most important. It is accessible for self-study by both senior students and professionals.

  4. Development of an integrated circuit VLSI used for time measurement and selective read out in the front end electronics of the DIRC for the Babar experience at SLAC; Developpement d'un circuit integre VLSI assurant mesure de temps et lecture selective dans l'electronique frontale du compteur DIRC de l'experience babar a slac

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1999-07-01

    This thesis deals with the design, development and testing of a VLSI integrated circuit providing selective readout and time measurement for 16 channels. The circuit has been developed for a particle physics experiment, BABAR, taking place at SLAC (Stanford Linear Accelerator Center). The first part describes the physics goals of the experiment, the electronics architecture and the place of the developed circuit in the research program. The second part presents the technical design of the circuit, the prototypes leading to the final design and the validation tests. (A.L.B.)

  5. An Automated Design Approach for High-Lift Systems incorporating Eccentric Beam Actuators

    NARCIS (Netherlands)

    Steenhuizen, D.; Van Tooren, M.J.L.

    2010-01-01

    In order to assess the merit of novel high-lift structural concepts for the design of contemporary and future transport aircraft, a highly automated design routine is elaborated. The structure, purpose and evolution of this design routine are set out with the use of Knowledge-Based Engineering.

  6. Design and implementation of interface units for high speed fiber optics local area networks and broadband integrated services digital networks

    Science.gov (United States)

    Tobagi, Fouad A.; Dalgic, Ismail; Pang, Joseph

    1990-01-01

    The design and implementation of interface units for high-speed Fiber Optic Local Area Networks and Broadband Integrated Services Digital Networks are discussed. In recent years, a number of network adapters designed to support high-speed communications have emerged. The approach taken here to the design of a high-speed network interface unit was to implement packet processing functions in hardware, using VLSI technology. The VLSI hardware implementation of a buffer management unit, which is required in such architectures, is described.

  7. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer; in order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system based on the fusion of multiple motion sensors is proposed. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, a speed-up of approximately 45 times is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device arbitrarily while completing the specified gestures. Although a few percent lower than the best conventional results, this still provides accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device arbitrarily while performing the predefined gestures, which substantially enhances the user experience.

  8. Automated firewall analytics design, configuration and optimization

    CERN Document Server

    Al-Shaer, Ehab

    2014-01-01

    This book provides a comprehensive and in-depth study of automated firewall policy analysis for designing, configuring and managing distributed firewalls in large-scale enterprise networks. It presents methodologies, techniques and tools for researchers as well as professionals to understand the challenges and improve the state of the art of managing firewalls systematically in both research and application domains. Chapters explore set theory, managing firewall configuration globally and consistently, access control lists with encryption, and authentication such as IPSec policies. The author

  9. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    Science.gov (United States)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
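    As a software illustration of the idea summarized above, the sketch below implements a small Fermat number transform over F4 = 2^16 + 1 with 2 as the root of unity, so that multiplications by powers of the root correspond to bit shifts, and uses it for convolution. The transform length, modulus and test data are assumptions chosen to keep the example small; the paper's 128-point pipelined hardware and its generalized overlap-save scheme are not reproduced here.

```python
# Illustrative sketch (not the paper's hardware architecture): a Fermat number
# transform over F4 = 2^16 + 1 with root 2, so multiplication by powers of the
# root corresponds to bit shifts/rotations in hardware, as the abstract notes.

P = 2**16 + 1            # Fermat prime F4 = 65537
N = 32                   # 2 has multiplicative order 32 modulo F4
W = 2                    # root of unity: powers of 2 -> bit rotations in hardware

def fnt(x, root):
    """Naive O(N^2) Fermat number transform of a length-N sequence."""
    return [sum(x[n] * pow(root, k * n, P) for n in range(N)) % P
            for k in range(N)]

def ifnt(X):
    """Inverse transform: use the inverse root and scale by N^-1 mod P."""
    inv_root = pow(W, P - 2, P)
    inv_n = pow(N, P - 2, P)
    return [(v * inv_n) % P for v in fnt(X, inv_root)]

def cyclic_convolution(a, b):
    """Pointwise products in the transform domain give a cyclic convolution."""
    A, B = fnt(a, W), fnt(b, W)
    return ifnt([(x * y) % P for x, y in zip(A, B)])

if __name__ == "__main__":
    a = [1, 2, 3, 4] + [0] * (N - 4)
    b = [5, 6, 7] + [0] * (N - 3)
    # With zero padding, the cyclic result equals the linear convolution.
    print(cyclic_convolution(a, b)[:6])   # [5, 16, 34, 52, 45, 28]
```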

  10. MIDAS: Automated Approach to Design Microwave Integrated Inductors and Transformers on Silicon

    Directory of Open Access Journals (Sweden)

    L. Aluigi

    2013-09-01

    Full Text Available The design of modern radiofrequency integrated circuits on silicon operating at microwave and millimeter-wave frequencies requires the integration of several spiral inductors and transformers that are not commonly available in the process design kits of the technologies. In this work we present an auxiliary CAD tool for Microwave Inductor (and transformer) Design Automation on Silicon (MIDAS) that exploits commercial simulators and allows the implementation of an automatic design flow, including three-dimensional layout editing and electromagnetic simulations. In detail, MIDAS allows the designer to derive a preliminary sizing of the inductor (transformer) on the basis of the design entries (specifications). It draws the inductor (transformer) layers for the specific process design kit, including vias and underpasses, with or without a patterned ground shield, and launches the electromagnetic simulations, achieving effective design automation with respect to the traditional design flow for RFICs. With the present software suite the complete design time is reduced significantly (typically 1 hour on a PC based on an Intel® Pentium® Dual 1.80 GHz CPU with 2 GB RAM). Afterwards both the device equivalent circuit and the layout are ready to be imported into the Cadence environment.

  11. Preliminary design of a production automation framework for a pyroprocessing facility

    Directory of Open Access Journals (Sweden)

    Moonsoo Shin

    2018-04-01

    Full Text Available Pyroprocessing technology has been regarded as a promising solution for recycling spent fuel in nuclear power plants. The Korea Atomic Energy Research Institute has been studying the current status of equipment and facilities for pyroprocessing and found that existing facilities are manually operated; therefore, their applications have been limited to laboratory scale because of low productivity and safety concerns. To extend the pyroprocessing technology to a commercial scale, the facility, including all the processing equipment and the material-handling devices, should be enhanced in view of automation. In an automated pyroprocessing facility, a supervised control system is needed to handle and manage material flow and associated operations. This article provides a preliminary design of the supervising system for pyroprocessing. In particular, a manufacturing execution system intended for an automated pyroprocessing facility, named Pyroprocessing Execution System, is proposed, by which the overall production process is automated via systematic collaboration with a planning system and a control system. Moreover, a simulation-based prototype system is presented to illustrate the operability of the proposed Pyroprocessing Execution System, and a simulation study to demonstrate the interoperability of the material-handling equipment with processing equipment is also provided. Keywords: Manufacturing Execution System, Material-handling, Production Automation, Production Planning and Control, Pyroprocessing, Pyroprocessing Execution System

  12. Advanced field-solver techniques for RC extraction of integrated circuits

    CERN Document Server

    Yu, Wenjian

    2014-01-01

    Resistance and capacitance (RC) extraction is an essential step in modeling the interconnection wires and substrate coupling effect in nanometer-technology integrated circuits (IC). The field-solver techniques for RC extraction guarantee the accuracy of modeling, and are becoming increasingly important in meeting the demand for accurate modeling and simulation of VLSI designs. Advanced Field-Solver Techniques for RC Extraction of Integrated Circuits presents a systematic introduction to, and treatment of, the key field-solver methods for RC extraction of VLSI interconnects and substrate coupling in mixed-signal ICs. Various field-solver techniques are explained in detail, with real-world examples to illustrate the advantages and disadvantages of each algorithm. This book will benefit graduate students and researchers in the field of electrical and computer engineering, as well as engineers working in the IC design and design automation industries. Dr. Wenjian Yu is an Associate Professor at the Department of ...

  13. Radiation hardness tests with a demonstrator preamplifier circuit manufactured in silicon on sapphire (SOS) VLSI technology

    International Nuclear Information System (INIS)

    Bingefors, N.; Ekeloef, T.; Eriksson, C.; Paulsson, M.; Moerk, G.; Sjoelund, A.

    1992-01-01

    Samples of the preamplifier circuit, as well as of separate n and p channel transistors of the type contained in the circuit, were irradiated with gammas from a 60Co source up to an integrated dose of 3 Mrad (30 kGy). The VLSI manufacturing technology used is the SOS4 process of ABB Hafo. A first analysis of the tests shows that the performance of the amplifier remains practically unaffected by the radiation for total doses up to 1 Mrad. At higher doses up to 3 Mrad the circuit amplification factor decreases by a factor between 4 and 5 whereas the output noise level remains unchanged. It is argued that it may be possible to reduce the decrease in amplification factor in the future by further optimizing the amplifier circuit design. (orig.)

  14. Automating analog design: Taming the shrew

    Science.gov (United States)

    Barlow, A.

    1990-01-01

    The pace of progress in the design of integrated circuits continues to amaze observers inside and outside of the industry. Three decades ago, a 50 transistor chip was a technological wonder. Fifteen years later, a 5000 transistor device would 'wow' the crowds. Today, 50,000 transistor chips will earn a 'not too bad' assessment, but it takes 500,000 to really leave an impression. In 1975 a typical ASIC device had 1000 transistors, took one year to first samples (and two years to production) and sold for about 5 cents per transistor. Today's 50,000 transistor gate array takes about 4 months from spec to silicon, works the first time, and sells for about 0.02 cents per transistor. Fifteen years ago, the single most laborious and error-prone step in IC design was the physical layout. Today, most ICs never see the hand of a layout designer: an automatic place and route tool converts the engineer's computer-captured schematic to a complete physical design using a gate array or a library of standard cells also created by software rather than by designers. CAD has also been a generous benefactor to the digital design process. The architect of today's digital systems creates the design using an RTL or other high-level simulator. Then the designer pushes a button to invoke the logic synthesizer-optimizer tool. A fault analyzer checks the result for testability and suggests where scan-based cells will improve test coverage. One obstinate holdout amidst this parade of progress is the automation of analog design and its reduction to semi-custom techniques. This paper investigates the application of CAD techniques to analog design.

  15. Design and analysis on sorting blade for automated size-based sorting device

    Science.gov (United States)

    Razali, Zol Bahri; Kader, Mohamed Mydin M. Abdul; Samsudin, Yasser Suhaimi; Daud, Mohd Hisam

    2017-09-01

    Nowadays rubbish separation and recycling is a major national problem: people dump their rubbish at dumpsites without considering its value or whether it can be recycled and reused. The authors therefore propose an automated segregating device, intended to encourage people to separate their rubbish and to value rubbish that can be reused. The automated size-based mechanical segregating device provides significant improvements in terms of efficiency and consistency in the segregating process. The device is designed to make recycling easier and more user friendly, in the hope that more people will take responsibility if it requires less time and effort. This paper discusses the redesign of a blade for the sorting device, with the aim of developing an efficient automated mechanical sorting device for similar materials of different sizes. The machine is able to identify the size of the waste and relies on the coil inside the container to separate it out. The detailed design and methodology are described in this paper.

  16. Designing Automated Guidance to Promote Productive Revision of Science Explanations

    Science.gov (United States)

    Tansomboon, Charissa; Gerard, Libby F.; Vitale, Jonathan M.; Linn, Marcia C.

    2017-01-01

    Supporting students to revise their written explanations in science can help students to integrate disparate ideas and develop a coherent, generative account of complex scientific topics. Using natural language processing to analyze student written work, we compare forms of automated guidance designed to motivate productive revision and help…

  17. Surrogate-Assisted Genetic Programming With Simplified Models for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Zhang, Mengjie; Tan, Kay Chen

    2017-09-01

    Automated design of dispatching rules for production systems has been an interesting research topic over the last several years. Machine learning, especially genetic programming (GP), has been a powerful approach to dealing with this design problem. However, intensive computational requirements, accuracy and interpretability are still its limitations. This paper aims at developing a new surrogate-assisted GP to help improve the quality of the evolved rules without significant computational costs. The experiments have verified the effectiveness and efficiency of the proposed algorithms as compared to those in the literature. Furthermore, new simplification and visualisation approaches have also been developed to improve the interpretability of the evolved rules. These approaches have shown great potential and proved to be a critical part of the automated design system.
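    To make the object of this automated design concrete, the sketch below shows how a dispatching rule can be represented as a small expression tree over job attributes and used to rank a queue, which is the kind of structure GP evolves. It is only an illustration under assumed attribute names (proc_time, due_date, slack); the surrogate models, the GP search itself and the simplification steps of the paper are not reproduced.

```python
# Minimal sketch of the object being evolved: a dispatching rule represented as an
# expression tree over job attributes, used to pick the next job from a queue.
# Attribute names and the example rules are hypothetical.
import operator

# A rule is a nested tuple ('op', left, right), an attribute name, or a constant.
SPT_RULE = 'proc_time'                                   # shortest processing time
EDD_RULE = 'due_date'                                    # earliest due date
COMBINED = ('add', 'proc_time', ('mul', 0.5, 'slack'))   # a GP-style composite rule

OPS = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}

def evaluate(rule, job):
    """Recursively evaluate a rule for one job (lower value = higher priority)."""
    if isinstance(rule, tuple):
        op, left, right = rule
        return OPS[op](evaluate(left, job), evaluate(right, job))
    if isinstance(rule, str):
        return job[rule]
    return float(rule)                                   # numeric constant

def dispatch(rule, queue):
    """Return the job the rule would start next."""
    return min(queue, key=lambda job: evaluate(rule, job))

if __name__ == "__main__":
    queue = [
        {'id': 'J1', 'proc_time': 4.0, 'due_date': 20.0, 'slack': 6.0},
        {'id': 'J2', 'proc_time': 7.0, 'due_date': 12.0, 'slack': 1.0},
        {'id': 'J3', 'proc_time': 2.0, 'due_date': 30.0, 'slack': 15.0},
    ]
    for rule in (SPT_RULE, EDD_RULE, COMBINED):
        print(rule, '->', dispatch(rule, queue)['id'])   # J3, J2, J1 respectively
```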

  18. Design of Air Conditioning Automation for Patisserie Shopwindow

    OpenAIRE

    Kemal Tutuncu; Recai Ozcan

    2013-01-01

    In this study, air-conditioning automation for a patisserie shopwindow was designed. In the cooling sector it is quite important to cool the air in the shopwindow within a short time interval; otherwise the patisseries inside the shopwindow will spoil within a few days. Humidity is another important parameter for the patisseries kept in the shopwindow; it must be raised to the desired level in a quite short time. Traditional patisser...

  19. Automation Interface Design Development

    Data.gov (United States)

    National Aeronautics and Space Administration — Our research makes its contributions at two levels. At one level, we addressed the problems of interaction between humans and computers/automation in a particular...

  20. Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    Science.gov (United States)

    Zumbado, Jennifer Rochlis; Billman, Dorrit; Feary, Mike; Green, Collin

    2011-01-01

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable or find it very difficult to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by human or by system had what effects. 5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and

  1. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    Science.gov (United States)

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework considering four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
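    The core selection step implied above, keeping only non-dominated designs, can be sketched in a few lines. The fragment below is only an illustration of Pareto filtering over made-up candidate scores; it does not reproduce the paper's mixed-integer dynamic optimization formulation.

```python
# Minimal sketch of Pareto optimality for design selection: given candidate designs
# scored on several metrics (all to be minimized here), keep only the non-dominated
# ones. Candidate names and scores below are invented for illustration.

def dominates(a, b):
    """a dominates b if it is no worse in every metric and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the subset of candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

if __name__ == "__main__":
    # (protein cost, response time) for hypothetical synthetic circuit designs
    designs = {'circuit_A': (1.0, 9.0), 'circuit_B': (2.0, 4.0),
               'circuit_C': (3.0, 5.0), 'circuit_D': (5.0, 1.0)}
    front = pareto_front(list(designs.values()))
    print([name for name, score in designs.items() if score in front])
    # -> ['circuit_A', 'circuit_B', 'circuit_D']  (circuit_C is dominated by circuit_B)
```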

  2. Ambient Temperature Based Thermal Aware Energy Efficient ROM Design on FPGA

    DEFF Research Database (Denmark)

    Saini, Rishita; Bansal, Neha; Bansal, Meenakshi

    2015-01-01

    Thermal-aware design is currently gaining importance in the VLSI research domain. In this work, we design a thermal-aware, energy-efficient ROM on a Virtex-5 FPGA. Ambient temperature, airflow, and heat sink profile play a significant role in the thermal-aware hardware design life cycle. Ambient...

  3. Operation of a Fast-RICH Prototype with VLSI readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Guyonnet, J.L. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Arnold, R. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Jobez, J.P. (Coll. de France, 75 - Paris (France)); Seguinot, J. (Coll. de France, 75 - Paris (France)); Ypsilantis, T. (Coll. de France, 75 - Paris (France)); Chesi, E. (CERN / ECP Div., Geneve (Switzerland)); Racz, A. (CERN / ECP Div., Geneve (Switzerland)); Egger, J. (Paul Scherrer Inst., Villigen (Switzerland)); Gabathuler, K. (Paul Scherrer Inst., Villigen (Switzerland)); Joram, C. (Karlsruhe Univ. (Germany)); Adachi, I. (KEK, Tsukuba (Japan)); Enomoto, R. (KEK, Tsukuba (Japan)); Sumiyoshi, T. (KEK, Tsukuba (Japan))

    1994-04-01

    We discuss the first test results, obtained with cosmic rays, of a full-scale Fast-RICH prototype with proximity-focused 10 mm thick LiF (CaF2) solid radiators, TEA as photosensor in CH4, and readout of 12 x 10^3 cathode pads (5.334 x 6.604 mm^2) using dedicated VLSI electronics we have developed. The number of detected photoelectrons is 7.7 (6.9) for the CaF2 (LiF) radiator, very near to the expected values of 6.4 (7.5) from Monte Carlo simulations. The single-photon Cherenkov angle resolution sigma_theta

  4. Design automation for integrated nonlinear logic circuits (Conference Presentation)

    Science.gov (United States)

    Van Vaerenbergh, Thomas; Pelc, Jason; Santori, Charles; Bose, Ranojoy; Kielpinski, Dave; Beausoleil, Raymond G.

    2016-05-01

    A key enabler of the IT revolution of the late 20th century was the development of electronic design automation (EDA) tools allowing engineers to manage the complexity of electronic circuits with transistor counts now reaching into the billions. Recently, we have been developing large-scale nonlinear photonic integrated logic circuits for next generation all-optical information processing. At this time a sufficiently powerful EDA-style software tool chain to design this type of complex circuit does not yet exist. Here we describe a hierarchical approach to automating the design and validation of photonic integrated circuits, which can scale to several orders of magnitude higher complexity than the state of the art. Most photonic integrated circuits developed today consist of a small number of components, and only limited hierarchy. For example, a simple photonic transceiver may contain on the order of 10 building-block components, consisting of grating couplers for photonic I/O, modulators, and signal splitters/combiners. Because this is relatively easy to lay out by hand (or by simple script), existing photonic design tools have relatively little automation in comparison to electronics tools. But demonstrating all-optical logic will require significantly more complex photonic circuits containing up to 1,000 components, which are infeasible to design manually. Our design framework is based on Python-based software from Luceda Photonics which provides an environment to describe components, simulate their behavior, and export design files (GDS) to foundries for fabrication. At a fundamental level, a photonic component is described as a parametric cell (PCell), similarly to electronics design. PCells are described by geometric characteristics of their layout. A critical part of the design framework is the implementation of PCells as Python objects. PCell objects can then use inheritance to simplify design, and hierarchical designs can be made by creating composite
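    The "PCell as Python object" idea described above can be illustrated schematically as follows. This is emphatically not the Luceda Photonics API: the class names, parameters and geometry handling below are invented solely to show how inheritance and composition can organise a hierarchical photonic layout.

```python
# Schematic illustration of parametric cells as Python objects. Class and
# parameter names are invented; no real photonic design library is used.

class PCell:
    """A parametric cell: geometry is derived from named parameters."""
    def __init__(self, name, **params):
        self.name = name
        self.params = params

    def polygons(self):
        """Return a list of (layer, geometry) tuples; subclasses override this."""
        raise NotImplementedError

class Waveguide(PCell):
    def polygons(self):
        length = self.params.get('length', 10.0)   # assumed units: microns
        width = self.params.get('width', 0.5)
        return [('WG', [(0, 0), (length, 0), (length, width), (0, width)])]

class RingResonator(Waveguide):
    """Inheritance: reuses waveguide parameters and adds a ring (kept symbolic)."""
    def polygons(self):
        radius = self.params.get('radius', 5.0)
        return super().polygons() + [('WG_RING', f'circle r={radius}')]

class CompositeCell(PCell):
    """Hierarchy: a cell built from placed child cells."""
    def __init__(self, name, children):
        super().__init__(name)
        self.children = children          # list of (child_pcell, (dx, dy)) placements

    def polygons(self):
        return [(layer, (offset, geom))
                for child, offset in self.children
                for layer, geom in child.polygons()]

if __name__ == "__main__":
    chip = CompositeCell('logic_stage', [
        (Waveguide('bus', length=20.0), (0.0, 0.0)),
        (RingResonator('ring1', radius=4.0), (10.0, 2.0)),
    ])
    for layer, geom in chip.polygons():
        print(layer, geom)
```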

  5. Design of a robotic automation system for transportation of goods in hospitals

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan; Sørensen, Torben; Fan, Zhun

    2007-01-01

    Hospitals face heavy traffic of goods every day, where transportation tasks are mainly carried out by humans. Analysis of the current transportation situation in a typical hospital showed that several transportation tasks are suitable for automation. This paper presents a system, consisting of a fleet of robot vehicles, automatic stations and smart containers, for the automation of transportation of goods in hospitals. The design of semi-autonomous robot vehicles, containers and stations is presented and the overall system architecture is described. Implementing such a system in an existing hospital showed...

  6. Design of delay insensitive circuits using multi-ring structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Staunstrup, Jørgen; Dantzer-Sørensen, Michael

    1992-01-01

    The design and VLSI implementation of a delay insensitive circuit that computes the inner product of two vectors is described. The circuit is based on an iterative serial-parallel multiplication algorithm. The design is based on a data flow approach using pipelines and rings that are combined...

  7. A Case Study of Reverse Engineering Integrated in an Automated Design Process

    Science.gov (United States)

    Pescaru, R.; Kyratsis, P.; Oancea, G.

    2016-11-01

    This paper presents a design methodology which automates the generation of curves extracted from point clouds obtained by digitizing physical objects. The methodology is demonstrated on a product from the consumer goods industry, namely a footwear-type product that has a complex shape with many curves. The final result is the automated generation of wrapping curves, surfaces and solids according to the characteristics of the customer's foot and to the preferences for the chosen model, which leads to the development of customized products.

  8. Flip-flop design in nanometer CMOS from high speed to low energy

    CERN Document Server

    Alioto, Massimo; Palumbo, Gaetano

    2015-01-01

    This book provides a unified treatment of Flip-Flop design and selection in nanometer CMOS VLSI systems. The design aspects related to the energy-delay tradeoff in Flip-Flops are discussed, including their energy-optimal selection according to the targeted application, and the detailed circuit design in nanometer CMOS VLSI systems. Design strategies are derived in a coherent framework that includes explicitly nanometer effects, including leakage, layout parasitics and process/voltage/temperature variations, as main advances over the existing body of work in the field. The related design tradeoffs are explored in a wide range of applications and the related energy-performance targets. A wide range of existing and recently proposed Flip-Flop topologies are discussed. Theoretical foundations are provided to set the stage for the derivation of design guidelines, and emphasis is given on practical aspects and consequences of the presented results. Analytical models and derivations are introduced when needed to gai...

  9. Automated design evolution of stereochemically randomized protein foldamers

    Science.gov (United States)

    Ranbhor, Ranjit; Kumar, Anil; Patel, Kirti; Ramakrishnan, Vibin; Durani, Susheel

    2018-05-01

    Diversification of chain stereochemistry opens up the possibility of an 'in principle' increase in the design space of proteins. This huge increase in sequence and consequent structural variation is aimed at the generation of smart materials. To diversify protein structure stereochemically, we introduced L- and D-α-amino acids as the design alphabet. With a sequence design algorithm, we explored the usage of specific variables such as chirality and the sequence of this alphabet in independent steps. With molecular dynamics, we folded stereochemically diverse homopolypeptides and evaluated their 'fitness' for possible design as protein-like foldamers. We propose a fitness function to prune the most optimal fold among 1000 structures simulated with an automated repetitive simulated annealing molecular dynamics (AR-SAMD) approach. The highest-scoring poly-leucine folds with sequence lengths of 24 and 30 amino acids were later sequence-optimized using a Dead End Elimination cum Monte Carlo based optimization tool. This paper demonstrates a novel approach for the de novo design of protein-like foldamers.

  10. An automated supernova search and the design strategy

    International Nuclear Information System (INIS)

    Colgate, S.A.

    1987-01-01

    The design considerations for an automated supernova search are reviewed. If supernovae are to be found a week after explosion, well before the light maximum of both Types I and II, and if a discovery rate of 52 per year is to be achieved, then one needs to keep roughly 5000 galaxies under surveillance out of a full set of 15,000 galaxies at a distance of ≅50 Mpc. Detection at 1% of Type I maximum light requires a 30-inch telescope, a threshold of 10 photoelectrons per pixel, a 128 x 128 pixel photodetector operating with a 3-second integration time, and 2 seconds to slew and settle ≅1°. A system designed to perform this function in real time is described. 7 refs
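    A quick arithmetic check of the quoted surveillance numbers is given below; the implied per-galaxy supernova rate is an inference from the abstract's own figures rather than a value stated in it.

```python
# Arithmetic check of the surveillance figures quoted above (inference only).
galaxies_monitored = 5000
target_discoveries_per_year = 52

rate_per_galaxy_per_year = target_discoveries_per_year / galaxies_monitored
print(f"implied rate: {rate_per_galaxy_per_year:.4f} SNe per galaxy per year")
print(f"            ~ 1 supernova per galaxy every {1 / rate_per_galaxy_per_year:.0f} years")
# -> about 0.0104 per galaxy per year, i.e. roughly one per galaxy per century.
```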

  11. VLSI implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is the main difficulty faced in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for Decision Tree classification in data mining using the C4.5 algorithm.
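    To make the classification step concrete, the sketch below implements the gain-ratio split criterion at the heart of C4.5 in software; the paper's flexible VLSI architecture itself is not reproduced, and the toy data are invented for illustration.

```python
# Software sketch of the C4.5 split criterion (gain ratio); data are invented.
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def gain_ratio(rows, labels, attribute):
    """Information gain of splitting on `attribute`, normalised by split entropy."""
    total_entropy = entropy(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attribute], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    split_info = entropy([row[attribute] for row in rows])
    gain = total_entropy - remainder
    return gain / split_info if split_info > 0 else 0.0

if __name__ == "__main__":
    rows = [{'outlook': 'sunny', 'windy': 'no'}, {'outlook': 'sunny', 'windy': 'yes'},
            {'outlook': 'rain',  'windy': 'no'}, {'outlook': 'rain',  'windy': 'yes'}]
    labels = ['play', 'stay', 'play', 'stay']
    for attr in ('outlook', 'windy'):
        print(attr, round(gain_ratio(rows, labels, attr), 3))
    # 'windy' separates the classes perfectly (gain ratio 1.0), 'outlook' not at all.
```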

  12. Low cost automation

    International Nuclear Information System (INIS)

    1987-03-01

    This book covers methods of building an automation plan and designing automation facilities. It addresses automation of chip-producing processes, such as the basics of cutting, NC processing machines and chip handling; automation units such as drilling, tapping, boring, milling and slide units; the application of hydraulics, including their characteristics and basic hydraulic circuits; and the application of pneumatics. It also covers the kinds of automation and their application to processes such as assembly, transportation and automatic machines, as well as factory automation.

  13. How the new optoelectronic design automation industry is taking advantage of preexisting EDA standards

    Science.gov (United States)

    Nesmith, Kevin A.; Carver, Susan

    2014-05-01

    With the advancements in design processes down to the sub 7nm levels, the Electronic Design Automation industry appears to be coming to an end of advancements, as the size of the silicon atom becomes the limiting factor. Or is it? The commercial viability of mass-producing silicon photonics is bringing about the Optoelectronic Design Automation (OEDA) industry. With the science of photonics in its infancy, adding these circuits to ever-increasing complex electronic designs, will allow for new generations of advancements. Learning from the past 50 years of the EDA industry's mistakes and missed opportunities, the photonics industry is starting with electronic standards and extending them to become photonically aware. Adapting the use of pre-existing standards into this relatively new industry will allow for easier integration into the present infrastructure and faster time to market.

  14. Home Automation

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this paper I briefly discuss the importance of home automation systems. Going into the details, I briefly present a real-time, software- and hardware-oriented house automation research project that was designed and implemented, capable of automating a house's electricity and providing a security system that detects the presence of unexpected behavior.

  15. Designing an Automated Assessment of Public Speaking Skills Using Multimodal Cues

    Science.gov (United States)

    Chen, Lei; Feng, Gary; Leong, Chee Wee; Joe, Jilliam; Kitchen, Christopher; Lee, Chong Min

    2016-01-01

    Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of…

  16. Global floor planning approach for VLSI design

    International Nuclear Information System (INIS)

    LaPotin, D.P.

    1986-01-01

    Within a hierarchical design environment, initial decisions regarding the partitioning and choice of module attributes greatly impact the quality of the resulting IC in terms of area and electrical performance. This dissertation presents a global floor-planning approach which allows designers to quickly explore layout issues during the initial stages of the IC design process. In contrast to previous efforts, which address the floor-planning problem from a strict module placement point of view, this approach considers floor-planning from an area planning point of view. The approach is based upon a combined min-cut and slicing paradigm, which ensures routability. To provide flexibility, modules may be specified as having a number of possible dimensions and orientations, and I/O pads as well as layout constraints are considered. A slicing-tree representation is employed, upon which a sequence of traversal operations are applied in order to obtain an area efficient layout. An in-place partitioning technique, which provides an improvement over previous min-cut and slicing-based efforts, is discussed. Global routing and module I/O pin assignment are provided for floor-plan evaluation purposes. A computer program, called Mason, has been developed which efficiently implements the approach and provides an interactive environment for designers to perform floor-planning. Performance of this program is illustrated via several industrial examples
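    The slicing-tree representation mentioned above composes module dimensions bottom-up through horizontal and vertical cuts; the small sketch below illustrates that composition only, and does not reproduce the dissertation's min-cut partitioning, in-place improvement or global routing steps.

```python
# Sketch of a slicing-tree floorplan: leaves are module dimensions, internal
# nodes are horizontal or vertical cuts, and the chip outline is composed bottom-up.

def slicing_area(node):
    """Return (width, height) of a slicing-tree node.

    A leaf is a (width, height) tuple; an internal node is ('H' | 'V', left, right):
    'H' stacks the children vertically, 'V' places them side by side.
    """
    if isinstance(node[0], (int, float)):          # leaf module
        return node
    cut, left, right = node
    lw, lh = slicing_area(left)
    rw, rh = slicing_area(right)
    if cut == 'H':                                 # horizontal cut: stack vertically
        return max(lw, rw), lh + rh
    return lw + rw, max(lh, rh)                    # vertical cut: place side by side

if __name__ == "__main__":
    # Three modules: A(4x3), B(2x3), C(6x2); place A and B side by side, C below.
    floorplan = ('H', ('V', (4, 3), (2, 3)), (6, 2))
    w, h = slicing_area(floorplan)
    print(f"bounding box: {w} x {h}, area = {w * h}")   # 6 x 5, area = 30
```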

  17. Automation of Test Cases for Web Applications : Automation of CRM Test Cases

    OpenAIRE

    Seyoum, Alazar

    2012-01-01

    The main theme of this project was to design a test automation framework for automating web-related test cases. Automating test cases designed for testing a web interface provides a means of improving a software development process by shortening the testing phase of the software development life cycle. In this project an existing AutoTester framework and the iMacros test automation tool were used. A CRM Test Agent was developed to integrate AutoTester with iMacros and to enable the AutoTester,...

  18. Productivity and data processing: two essentials for a dynamic company. Proceedings of the spring convention

    Energy Technology Data Exchange (ETDEWEB)

    1983-01-01

    The following topics were dealt with: office automation markets; future developments; software package market; telecommunication for big organizations; DP security; robotics; teletel; EDP management; role of VLSI; peripheral equipment; systems architecture; artificial intelligence; expert systems; warehouse automation; and microcomputer techniques.

  19. Design, Construction, Demonstration and Delivery of an Automated Narrow Gap Welding System.

    Science.gov (United States)

    1982-06-29

    Design, Construction, Demonstration and Delivery of an Automated Narrow Gap Welding System. Contract No. NOOGOO-81-C-E923, for the David Taylor Naval Research and Development ... A key feature of the automated Narrow Gap welding process is the narrow (3/8-inch), square-butt joint design; this narrow joint greatly reduces the volume of weld ... (CRC Automatic Welding Co., Houston, TX)

  20. An architecture pattern for safety critical automated driving applications: Design and analysis

    NARCIS (Netherlands)

    Luo, Y.; Saberi, A.K.; Bijlsma, T.; Lukkien, J.J.; Brand, M. van den

    2017-01-01

    Introduction of automated driving increases complexity of automotive systems. As a result, architecture design becomes a major concern for ensuring non-functional requirements such as safety, and modifiability. In the ISO 26262 standard, architecture patterns are recommended for system development.

  1. An architecture pattern for safety critical automated driving applications : design and analysis

    NARCIS (Netherlands)

    Luo, Y.; Khabbaz Saberi, A.; Bijlsma, T.; Lukkien, J.J.; van den Brand, M.G.J.

    2017-01-01

    Introduction of automated driving increases complexity of automotive systems. As a result, architecture design becomes a major concern for ensuring non-functional requirements such as safety, and modifiability. In the ISO 26262 standard, architecture patterns are recommended for system development.

  2. Automated Design and Analysis Tool for CEV Structural and TPS Components, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CEV structures and TPS. This developed process will...

  3. Ankle-Foot Orthosis Made by 3D Printing Technique and Automated Design Software

    Directory of Open Access Journals (Sweden)

    Yong Ho Cha

    2017-01-01

    Full Text Available We describe a 3D printing technique and automated design software, together with the clinical results after application of the resulting AFO to a patient with foot drop. After acquiring a 3D model of the lower leg of a patient with peroneal neuropathy using a 3D scanner, we loaded this file into the automated orthosis software and created the "STL" file. The designed AFO was printed using a fused filament fabrication type 3D printer, and a mechanical stress test was performed. The patient alternated between the 3D-printed and conventional AFOs for 2 months. There was no crack or damage, and the shape and stiffness of the AFO did not change after the durability test. The gait speed increased after wearing the conventional AFO (56.5 cm/sec) and the 3D-printed AFO (56.5 cm/sec) compared to that without an AFO (42.2 cm/sec). The patient was more satisfied with the 3D-printed AFO than the conventional AFO in terms of weight and ease of use. The 3D-printed AFO exhibited similar functionality to the conventional AFO and considerably satisfied the patient in terms of weight and ease of use. We suggest the possibility of individualized AFOs produced with 3D printing techniques and automated design software.

  4. A Systems Approach to Information Technology (IT) Infrastructure Design for Utility Management Automation Systems

    OpenAIRE

    A. Fereidunian; H. Lesani; C. Lucas; M. Lehtonen; M. M. Nordman

    2006-01-01

    Almost all electric utility companies are planning to improve their management automation systems, in order to meet the changing requirements of the new liberalized energy market and to benefit from innovations in information and communication technology (ICT or IT). Architectural design of utility management automation (UMA) systems for IT-enabling requires proper selection of IT choices for the UMA system, which leads to multi-criteria decision-making (MCDM). In resp...

  5. Ecological Automation Design, Extending Work Domain Analysis

    NARCIS (Netherlands)

    Amelink, M.H.J.

    2010-01-01

    In high–risk domains like aviation, medicine and nuclear power plant control, automation has enabled new capabilities, increased the economy of operation and has greatly contributed to safety. However, automation increases the number of couplings in a system, which can inadvertently lead to more

  6. Three Experiments Examining the Use of Electroencephalogram,Event-Related Potentials, and Heart-Rate Variability for Real-Time Human-Centered Adaptive Automation Design

    Science.gov (United States)

    Prinzel, Lawrence J., III; Parasuraman, Raja; Freeman, Frederick G.; Scerbo, Mark W.; Mikulka, Peter J.; Pope, Alan T.

    2003-01-01

    Adaptive automation represents an advanced form of human-centered automation design. The approach provides for real-time and model-based assessments of human-automation interaction, determines whether the human has entered a hazardous state of awareness, and then modulates the task environment to keep the operator in the loop while maintaining an optimal state of task engagement and mental alertness. Because adaptive automation has not matured, numerous challenges remain, including the criteria for determining when adaptive aiding and adaptive function allocation should take place. Human factors experts in the area have suggested a number of measures, including the use of psychophysiology. This NASA Technical Paper reports on three experiments that examined the psychophysiological measures of event-related potentials, electroencephalogram, and heart-rate variability for real-time adaptive automation. The results of the experiments confirm the efficacy of these measures for use in both a developmental and an operational role for adaptive automation. The implications of these results and future directions for psychophysiology and human-centered automation design are discussed.

  7. Circuits and filters handbook

    CERN Document Server

    Chen, Wai-Kai

    2003-01-01

    A bestseller in its first edition, The Circuits and Filters Handbook has been thoroughly updated to provide the most current, most comprehensive information available in both the classical and emerging fields of circuits and filters, both analog and digital. This edition contains 29 new chapters, with significant additions in the areas of computer-aided design, circuit simulation, VLSI circuits, design automation, and active and digital filters. It will undoubtedly take its place as the engineer's first choice in looking for solutions to problems encountered in the design, analysis, and behavi

  8. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO) unit and A-, B-, Γ-, and LLR output calculation modules. Measurements of complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and a comparison between those architectures.
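    The recursive ACSO unit named above implements, in hardware, the add-compare-select-plus-offset operation (the max* of log-MAP decoding). The sketch below shows that operation in software as a point of reference; it is not the paper's VLSI datapath, and the metric values used are illustrative.

```python
# Software sketch of the operation an add-compare-select-offset (ACSO) unit performs:
# max* (Jacobian logarithm) = add-compare-select followed by a small correction offset.
import math

def max_star(a, b):
    """max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a - b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def acso(alpha_prev, gammas):
    """One forward-recursion step: combine two incoming path metrics for a state."""
    # Each new state metric is max* over (previous metric + branch metric) pairs.
    return max_star(alpha_prev[0] + gammas[0], alpha_prev[1] + gammas[1])

if __name__ == "__main__":
    print(round(max_star(1.0, 1.0), 4))   # 1.6931: offset term matters when metrics are close
    print(round(max_star(5.0, 0.0), 4))   # 5.0067: offset nearly vanishes for distant metrics
    print(round(acso((0.3, -0.2), (1.1, 0.7)), 4))
```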

  9. Initial beam test results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Adolphsen, C.; Litke, A.; Schwarz, A.

    1986-01-01

    Silicon detectors with 256 strips, having a pitch of 25 μm, and connected to two 128 channel NMOS VLSI chips each (Microplex), have been tested in relativistic charged particle beams at CERN and at the Stanford Linear Accelerator Center. The readout chips have an input channel pitch of 47.5 μm and a single multiplexed output which provides voltages proportional to the integrated charge from each strip. The most probable signal height from minimum ionizing tracks was 15 times the rms noise in any single channel. Two-track traversals with a separation of 100 μm were cleanly resolved

  10. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  11. DATA TRANSFER IN THE AUTOMATED SYSTEM OF PARALLEL DESIGN AND CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Volkov Andrey Anatol'evich

    2012-12-01

    Full Text Available This article covers data transfer processes in the automated system of parallel design and construction. The authors consider the structure of reports used by contractors and clients when large-scale projects are implemented. All necessary items of information are grouped into three levels, and each level is described by certain attributes. The authors devote considerable attention to the integrated operational schedule, as it is the main tool of project management. Some recommendations concerning the forms and the content of reports are presented. Integrated automation of all operations is a necessary condition for the successful implementation of the new concept. The technical aspect of the notion of parallel design and construction also includes the client-to-server infrastructure that brings together all processes implemented by the parties involved in the projects. This approach should be taken into consideration in the course of review of existing codes and standards to eliminate any inconsistency between the construction legislation and the practical experience of engineers involved in the process.

  12. Model-driven design using IEC 61499 a synchronous approach for embedded and automation systems

    CERN Document Server

    Yoong, Li Hsien; Bhatti, Zeeshan E; Kuo, Matthew M Y

    2015-01-01

    This book describes a novel approach for the design of embedded systems and industrial automation systems, using a unified model-driven approach that is applicable in both domains.  The authors illustrate their methodology, using the IEC 61499 standard as the main vehicle for specification, verification, static timing analysis and automated code synthesis.  The well-known synchronous approach is used as the main vehicle for defining an unambiguous semantics that ensures determinism and deadlock freedom. The proposed approach also ensures very efficient implementations either on small-scale embedded devices or on industry-scale programmable automation controllers (PACs). It can be used for both centralized and distributed implementations. Significantly, the proposed approach can be used without the need for any run-time support. This approach, for the first time, blurs the gap between embedded systems and automation systems and can be applied in wide-ranging applications in automotive, robotics, and industri...

  13. Integration of SPICE with TEK LV500 ASIC Design Verification System

    Directory of Open Access Journals (Sweden)

    A. Srivastava

    1996-01-01

    Full Text Available The present work involves integration of the simulation stage of the design of a VLSI circuit and its testing stage. The SPICE simulator, the TEK LV500 ASIC Design Verification System, and TekWAVES, a test program generator for the LV500, were integrated. A software interface in 'C' language in a UNIX 'Solaris 1.x' environment has been developed between SPICE and the testing tools (TekWAVES and LV500). The function of the software interface developed is multifold. It takes input from either SPICE 2G.6 or SPICE 3e.1, and the output generated by the interface software can be given as an input to either TekWAVES or LV500. A graphical user interface has also been developed with OpenWindows using the XView toolkit on a SUN workstation. As an example, a two-phase clock generator circuit has been considered and the usefulness of the software demonstrated. The interface software could be easily linked with VLSI design tools such as the MAGIC layout editor.

  14. Designing the next generation (fifth generation computers)

    International Nuclear Information System (INIS)

    Wallich, P.

    1983-01-01

    A description is given of the designs necessary to develop fifth generation computers. An analysis is offered of problems and developments in parallelism, VLSI, artificial intelligence, knowledge engineering and natural language processing. Software developments are outlined including logic programming, object-oriented programming and exploratory programming. Computer architecture is detailed including concurrent computer architecture

  15. Co-creative design developments for accessibility and home automation

    OpenAIRE

    Taib, SM; De Coster, R; Sabri Tekantape, E

    2017-01-01

    The term “Home Automation” refers to a networked home, which provides electronically controlled security and convenience for its users. Home automation is also defined as the integration of home-based technology and services for a better quality of living (Quynh, et al., 2012). The main purpose of home automation technologies is to enhance home comfort for everyone through higher security, the automation of domestic tasks and easy communication. Home automation should be able to enha...

  16. Automation in Warehouse Development

    CERN Document Server

    Verriet, Jacques

    2012-01-01

    The warehouses of the future will come in a variety of forms, but with a few common ingredients. Firstly, human operational handling of items in warehouses is increasingly being replaced by automated item handling. Extended warehouse automation counteracts the scarcity of human operators and supports the quality of picking processes. Secondly, the development of models to simulate and analyse warehouse designs and their components facilitates the challenging task of developing warehouses that take into account each customer’s individual requirements and logistic processes. Automation in Warehouse Development addresses both types of automation from the innovative perspective of applied science. In particular, it describes the outcomes of the Falcon project, a joint endeavour by a consortium of industrial and academic partners. The results include a model-based approach to automate warehouse control design, analysis models for warehouse design, concepts for robotic item handling and computer vision, and auton...

  17. CASTOR a VLSI CMOS mixed analog-digital circuit for low noise multichannel counting applications

    International Nuclear Information System (INIS)

    Comes, G.; Loddo, F.; Hu, Y.; Kaplon, J.; Ly, F.; Turchetta, R.; Bonvicini, V.; Vacchi, A.

    1996-01-01

    In this paper we present the design and first experimental results of a VLSI mixed analog-digital 1.2 μm CMOS circuit (CASTOR) for multichannel radiation detector applications demanding low-noise amplification and counting of radiation pulses. This circuit is meant to be connected to pixel-like detectors. Imaging can be obtained by counting the number of hits in each pixel during a user-controlled exposure time. Each channel of the circuit features an analog and a digital part. In the former, a charge preamplifier is followed by a CR-RC shaper with an output buffer and a threshold discriminator. In the digital part, a 16-bit counter is present together with some control logic. The readout of the counters is done serially on a common tri-state output. Daisy-chaining is possible. A 4-channel prototype has been built. This prototype has been optimised for use in the digital radiography Syrmep experiment at the Elettra synchrotron machine in Trieste (Italy): its main design parameters are a shaping time of about 850 ns, a gain of 190 mV/fC and ENC (e- rms) = 60 + 17·C (pF). The counting rate per channel, limited by the analog part, can be as high as about 200 kHz. Characterisation of the circuit and first tests with silicon microstrip detectors are presented. They show that the circuit works according to design specifications and can be used for imaging applications. (orig.)

  18. Performance Evaluation of a Software Engineering Tool for Automated Design of Cooling Systems in Injection Moulding

    DEFF Research Database (Denmark)

    Jauregui-Becker, Juan M.; Tosello, Guido; van Houten, Fred J.A.M.

    2013-01-01

    This paper presents a software tool for automating the design of cooling systems for injection moulding and a validation of its performance. Cooling system designs were automatically generated by the proposed software tool and by applying a best practice tool engineering design approach. The two...

  19. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  20. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An Error Control Algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal-length key bits by a Round Constant Feedback Shift Register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan-3E FPGA and the results are shown. A comparative analysis between 128-bit Advanced Encryption Standard round keys and the proposed round keys demonstrates the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to existing threats on cryptography algorithms.

  1. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Data compression plays a vital role in multimedia devices to present information in a succinct frame. Initially, the DCT structure was used for image compression, as it has lower complexity and is area efficient. Similarly, 2D DCT also provides reasonable data compression, but from an implementation standpoint it requires more multipliers and adders, leading to larger area and higher power consumption. Taking account of all this, this paper deals with a VLSI architecture for image compression using a ROM-free DA-based DCT (Discrete Cosine Transform) structure. This technique provides high throughput and is well suited for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels compared with previous techniques. Overall, the proposed architecture produces reduced memory, low power consumption and high throughput. MATLAB is used as a supporting tool for receiving the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.
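
The architecture replaces multipliers with shift-and-add logic and relies on a Kogge-Stone adder for fast carry computation. A small behavioural sketch of a Kogge-Stone (parallel-prefix) addition, written here only to illustrate the prefix structure; the bit width is an arbitrary choice, and the paper's actual hardware is written in Verilog HDL:

```python
# Behavioural sketch of a Kogge-Stone parallel-prefix adder (illustrative only).
# Generate/propagate pairs are combined in log2(width) prefix stages, after
# which all carries are known and the sum bits are formed in parallel.

def kogge_stone_add(a: int, b: int, width: int = 8) -> int:
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]   # generate
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]   # propagate
    dist = 1
    while dist < width:
        g_new, p_new = g[:], p[:]
        for i in range(dist, width):
            # prefix combine: (g, p)_i  o  (g, p)_{i - dist}
            g_new[i] = g[i] | (p[i] & g[i - dist])
            p_new[i] = p[i] & p[i - dist]
        g, p, dist = g_new, p_new, dist * 2
    carries = [0] + g[:-1]   # carry into bit i = group generate of bits below i
    result = 0
    for i in range(width):
        s = (((a >> i) & 1) ^ ((b >> i) & 1) ^ carries[i]) & 1
        result |= s << i
    return result & ((1 << width) - 1)

# Quick self-check against ordinary 8-bit addition.
assert all(kogge_stone_add(x, y) == (x + y) & 0xFF
           for x in range(0, 256, 7) for y in range(0, 256, 11))
```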

  2. Iterative User Interface Design for Automated Sequential Organ Failure Assessment Score Calculator in Sepsis Detection.

    Science.gov (United States)

    Aakre, Christopher Ansel; Kitson, Jaben E; Li, Man; Herasevich, Vitaly

    2017-05-18

    The new sepsis definition has increased the need for frequent sequential organ failure assessment (SOFA) score recalculation, and the clerical burden of information retrieval makes this score ideal for automated calculation. The aim of this study was to (1) estimate the clerical workload of manual SOFA score calculation through a time-motion analysis and (2) describe a user-centered design process for an electronic medical record (EMR) integrated, automated SOFA score calculator with a subsequent usability evaluation study. First, we performed a time-motion analysis by recording time-to-task-completion for the manual calculation of 35 baseline and 35 current SOFA scores by 14 internal medicine residents over a 2-month period. Next, we used an agile development process to create a user interface for a previously developed automated SOFA score calculator. The final user interface usability was evaluated by clinician end users with the Computer Systems Usability Questionnaire. The overall mean (SD) time to complete a manual SOFA score calculation was 61.6 (33) s. Among the 24% (12/50) usability survey respondents, our user-centered user interface design process resulted in >75% favorability of survey items in the domains of system usability, information quality, and interface quality. Early stakeholder engagement in our agile design process resulted in a user interface for an automated SOFA score calculator that reduced clinician workload and met clinicians' needs at the point of care. Emerging interoperable platforms may facilitate dissemination of similarly useful clinical score calculators and decision support algorithms as "apps." A user-centered design process and usability evaluation should be considered during creation of these tools. ©Christopher Ansel Aakre, Jaben E Kitson, Man Li, Vitaly Herasevich. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 18.05.2017.
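
The automated calculator replaces manual look-ups of the six SOFA organ sub-scores. A minimal, illustrative sketch of the table-lookup structure such a calculator relies on, shown here only for the coagulation (platelet) component; the thresholds follow the commonly published SOFA definition, and the function and variable names are hypothetical, not taken from the JMIR paper. This is not clinical software and should be verified against the formal definition before any real use.

```python
# Illustrative sketch of one SOFA organ sub-score (coagulation / platelets).
# Thresholds follow the commonly published SOFA definition; names are
# hypothetical and the paper's EMR-integrated calculator is not reproduced here.

def sofa_coagulation(platelets_k_per_ul: float) -> int:
    """Platelet count in 10^3/uL -> SOFA coagulation sub-score (0-4)."""
    if platelets_k_per_ul >= 150:
        return 0
    if platelets_k_per_ul >= 100:
        return 1
    if platelets_k_per_ul >= 50:
        return 2
    if platelets_k_per_ul >= 20:
        return 3
    return 4

def total_sofa(subscores: dict) -> int:
    """Total SOFA score is the sum of the six organ sub-scores (each 0-4)."""
    return sum(subscores.values())

print(sofa_coagulation(85))                        # -> 2
print(total_sofa({"coagulation": 2, "renal": 1}))  # partial example -> 3
```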

  3. Associative Memory Design for the Fast TracKer Processor (FTK)at ATLAS

    CERN Document Server

    Annovi, A; The ATLAS collaboration; Beretta, M; Bossini, E; Crescioli, F; Dell'Orso, M; Giannetti, P; Hoff, J; Liu, T; Liberali, V; Sacco, I; Schoening, A; Soltveit, H K; Stabile, A; Tripiccione, R

    2011-01-01

    We describe a VLSI processor for pattern recognition based on Content Addressable Memory (CAM) architecture, optimized for on-line track finding in high-energy physics experiments. A large CAM bank stores all trajectories of interest and extracts the ones compatible with a given event. This task is naturally parallelized by a CAM architecture able to output identified trajectories, recognized among a huge amount of possible combinations, in just a few 100 MHz clock cycles. We have developed this device (called the AMchip03 processor), using 180 nm technology, for the Silicon Vertex Trigger (SVT) upgrade at CDF [1] using a standard-cell VLSI design methodology. We propose a new design that introduces a full custom CAM cell and takes advantage of 65 nm technology. The customized design maximizes the pattern density, minimizes the power consumption and implements the functionalities needed for the planned Fast Tracker (FTK) [2], an ATLAS trigger upgrade project at LHC. We introduce a new variable resolution patt...
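
The AMchip compares every stored track pattern against the incoming detector hits in parallel. A software stand-in for that content-addressable matching is sketched below; the pattern encoding (layer, coarse bin) and the pattern bank contents are invented for illustration, and the hardware performs all comparisons in parallel rather than in a loop.

```python
# Software stand-in for associative-memory (CAM) pattern matching: every stored
# pattern is compared against the event hits, and patterns whose required hits
# are all present are reported as candidate roads. Encoding is illustrative.

# Each pattern: a set of (layer, bin) pairs, one entry per detector layer.
PATTERN_BANK = {
    "road_A": {(0, 3), (1, 3), (2, 4), (3, 4)},
    "road_B": {(0, 7), (1, 6), (2, 6), (3, 5)},
}

def match_event(hits: set) -> list:
    """Return the names of all stored patterns fully contained in the event hits.
    The CAM does this comparison for every pattern in parallel; here we loop."""
    return [name for name, pattern in PATTERN_BANK.items() if pattern <= hits]

event_hits = {(0, 3), (1, 3), (2, 4), (3, 4), (0, 7), (2, 1)}
print(match_event(event_hits))   # -> ['road_A']
```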

  4. Applying machine learning to pattern analysis for automated in-design layout optimization

    Science.gov (United States)

    Cain, Jason P.; Fakhry, Moutaz; Pathak, Piyush; Sweis, Jason; Gennari, Frank; Lai, Ya-Chieh

    2018-04-01

    Building on previous work for cataloging unique topological patterns in an integrated circuit physical design, a new process is defined in which a risk scoring methodology is used to rank patterns based on manufacturing risk. Patterns with high risk are then mapped to functionally equivalent patterns with lower risk. The higher risk patterns are then replaced in the design with their lower risk equivalents. The pattern selection and replacement is fully automated and suitable for use for full-chip designs. Results from 14nm product designs show that the approach can identify and replace risk patterns with quantifiable positive impact on the risk score distribution after replacement.

  5. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    Science.gov (United States)

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.
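
The network learns through Spike Timing Dependent Plasticity, where the sign and size of each weight change depend on the relative timing of pre- and post-synaptic spikes. A minimal sketch of a standard exponential pair-based STDP update follows; the time constants and amplitudes are illustrative textbook values, not the parameters implemented on the neuromorphic chip.

```python
# Minimal pair-based STDP sketch: potentiate when the presynaptic spike precedes
# the postsynaptic spike, depress otherwise. Amplitudes and time constants are
# illustrative, not those of the hardware learning circuit.
import math

A_PLUS, A_MINUS = 0.01, 0.012      # learning-rate amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants in ms

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before (or with) pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)

for dt in (-40, -10, -1, 1, 10, 40):
    print(f"dt = {dt:+4d} ms  ->  dw = {stdp_dw(0.0, float(dt)):+.5f}")
```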

  6. Emergent auditory feature tuning in a real-time neuromorphic VLSI system

    Directory of Open Access Journals (Sweden)

    Sadique eSheik

    2012-02-01

    Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectrotemporal patterns. The availability of hardware on which the model can be implemented makes this a significant step towards the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  7. Flow-Based Biochips: Fault-Tolerant Design and Error Recovery

    DEFF Research Database (Denmark)

    Pop, Paul

    2015-01-01

    VLSI). Biochips are currently being designed manually using tools such as AutoCAD. Physical defects can be introduced during the fabrication process, which reduces the yield, and may lead to the failure of the biochemical application. Failure is costly because of the need to redo lengthy experiments, using...

  8. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    Science.gov (United States)

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  9. Lithography-based automation in the design of program defect masks

    Science.gov (United States)

    Vakanas, George P.; Munir, Saghir; Tejnil, Edita; Bald, Daniel J.; Nagpal, Rajesh

    2004-05-01

    In this work, we are reporting on a lithography-based methodology and automation in the design of Program Defect Masks (PDMs). Leading edge technology masks have ever-shrinking primary features and more pronounced model-based secondary features such as optical proximity corrections (OPC), sub-resolution assist features (SRAFs) and phase-shifted mask (PSM) structures. In order to define defect disposition specifications for critical layers of a technology node, experience alone in deciding worst-case scenarios for the placement of program defects is necessary but may not be sufficient. MEEF calculations initiated from layout pattern data and their integration in a PDM layout flow provide a natural approach for improvements, relevance and accuracy in the placement of programmed defects. This methodology provides closed-loop feedback between layout and hard defect disposition specifications, thereby minimizing engineering test restarts, improving quality and reducing cost of high-end masks. Apart from SEMI and industry standards, best-known methods (BKMs) in integrated lithographically-based layout methodologies and automation specific to PDMs are scarce. The contribution of this paper lies in the application of Design-For-Test (DFT) principles to a synergistic interaction of CAD Layout and Aerial Image Simulator to drive layout improvements, highlight layout-to-fracture interactions and output accurate program defect placement coordinates to be used by tools in the mask shop.

  10. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles

    Directory of Open Access Journals (Sweden)

    Hwisoo Eom

    2015-06-01

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.

  11. Human-Automation Interaction Design for Adaptive Cruise Control Systems of Ground Vehicles.

    Science.gov (United States)

    Eom, Hwisoo; Lee, Sang Hun

    2015-06-12

    A majority of recently developed advanced vehicles have been equipped with various automated driver assistance systems, such as adaptive cruise control (ACC) and lane keeping assistance systems. ACC systems have several operational modes, and drivers can be unaware of the mode in which they are operating. Because mode confusion is a significant human error factor that contributes to traffic accidents, it is necessary to develop user interfaces for ACC systems that can reduce mode confusion. To meet this requirement, this paper presents a new human-automation interaction design methodology in which the compatibility of the machine and interface models is determined using the proposed criteria, and if the models are incompatible, one or both of the models is/are modified to make them compatible. To investigate the effectiveness of our methodology, we designed two new interfaces by separately modifying the machine model and the interface model and then performed driver-in-the-loop experiments. The results showed that modifying the machine model provides a more compact, acceptable, effective, and safe interface than modifying the interface model.

  12. DESIGN OF BUILDING AUTOMATION BASED ON PROFIBUS-DP NETWORK

    Directory of Open Access Journals (Sweden)

    Cemal YILMAZ

    2006-02-01

    In this study, a building automation system has been designed using the Profibus-DP (Process Field Bus - Decentralized Periphery) network. In the study, fire alarm, intruder alarm, lighting, power, humidity and temperature control have been implemented. The data from the building are transmitted to the Profibus-DP network via control points located in the flats. The data taken from the building are collected in the main control unit to achieve overall control of the system. The work has provided optimum efficiency in energy consumption and in the control of power, security, temperature and humidity.

  13. PERFORMANCE OF LEAKAGE POWER MINIMIZATION TECHNIQUE FOR CMOS VLSI TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tharaneeswaran

    2012-06-01

    Leakage power in CMOS VLSI technology is a great concern. To reduce leakage power in CMOS circuits, a Leakage Power Minimization Technique (LPMT) is implemented in this paper. Leakage currents are monitored and compared, and the comparator kicks a charge pump to supply the body voltage (Vbody). Simulations of these circuits are done using TSMC 0.35 µm technology at various operating temperatures. A current-steering Digital-to-Analog Converter (CSDAC) is used as a test core to validate the idea. The test core (an 8-bit CSDAC) had a power consumption of 347.63 mW, while the LPMT circuit alone consumes 6.3405 mW. This technique reduces the leakage power of the 8-bit CSDAC by 5.51 mW and increases the reliability of the test core. Mentor Graphics ELDO and EZwave are used for the simulations.

  14. AIRCRAFT POWER SUPPLY SYSTEM DESIGN PROCESS AS AN AUTOMATION OBJECT

    Directory of Open Access Journals (Sweden)

    Boris V. Zhmurov

    2018-01-01

    aircraft and take into account all the requirements of the customer and the regulatory and technical documentation is its automation.Automation of the design of EPS aircraft as an optimization task involves the formalization of the object of optimization, as well as the choice of the criterion of efficiency and control actions. Under the object of optimization in this case we mean the design process of the EPS, the formalization of which includes formalization and the design object – the aircraft power supply system.

  15. Design and development on automated control system of coated fuel particle fabrication process

    International Nuclear Information System (INIS)

    Liu Malin; Shao Youlin; Liu Bing

    2013-01-01

    With the trend towards large-scale production of HTR coated fuel particles, the original manual control system can no longer meet the requirements, and an industrial-grade automated control system for coated fuel particle fabrication needs to be developed. A comprehensive analysis of the successive 4-layer coating process for TRISO-type coated fuel particles was carried out. It was found that the coating process can be divided into five subsystems and nine operating states. The establishment of a DCS-type (distributed control system) automated control system was proposed. According to the rigorous requirements of the preparation process for coated particles, the design considerations of the DCS were proposed, including the principles of coordinated control, safety and reliability, integration specification, practicality and ease of use, and openness and ease of updating. A complete automated control system for the coated fuel particle preparation process was built on the basis of these principles in manufacturing practice. The automated control system was put into operation in the production of irradiated samples for the HTR-PM demonstration project. The experimental results prove that the system achieves better control of the coated fuel particle preparation process and meets the requirements of factory-scale production. (authors)

  16. Automated Integration of Dedicated Hardwired IP Cores in Heterogeneous MPSoCs Designed with ESPAM

    Directory of Open Access Journals (Sweden)

    Ed Deprettere

    2008-06-01

    This paper presents a methodology and techniques for automated integration of dedicated hardwired (HW) IP cores into heterogeneous multiprocessor systems. We propose an IP core integration approach based on an HW module generation that consists of a wrapper around a predefined IP core. This approach has been implemented in a tool called ESPAM for automated multiprocessor system design, programming, and implementation. In order to keep the performance of the integrated IP cores high, the structure of the IP core wrapper is devised in a way that adequately represents and efficiently implements the main characteristics of the formal model of computation, namely Kahn process networks, which we use as the underlying programming model in ESPAM. We present details about the structure of the HW module, the supported types of IP cores, and the minimum interfaces these IP cores have to provide in order to allow automated integration in heterogeneous multiprocessor systems generated by ESPAM. The ESPAM design flow, the multiprocessor platforms we consider, and the underlying programming (KPN) model are introduced as well. Furthermore, we demonstrate the efficiency of our approach by applying our methodology and the ESPAM tool to automatically generate, implement, and program heterogeneous multiprocessor systems that integrate dedicated IP cores and execute real-life applications.

  17. Automated Design and Analysis Tool for CLV/CEV Composite and Metallic Structural Components, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation of the proposed effort is a unique automated process for the analysis, design, and sizing of CLV/CEV composite and metallic structures. This developed...

  18. Automated Robust Maneuver Design and Optimization

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA is seeking improvements to the current technologies related to Position, Navigation and Timing. In particular, it is desired to automate precise maneuver...

  19. Microfluidic very large-scale integration for biochips: Technology, testing and fault-tolerant design

    DEFF Research Database (Denmark)

    Araci, Ismail Emre; Pop, Paul; Chakrabarty, Krishnendu

    2015-01-01

    Microfluidic biochips are replacing the conventional biochemical analyzers by integrating all the necessary functions for biochemical analysis using microfluidics. Biochips are used in many application areas, such as in vitro diagnostics, drug discovery, biotech and ecology. The focus of this paper is on continuous-flow biochips, where the basic building block is a microvalve. By combining these microvalves, more complex units such as mixers, switches and multiplexers can be built, hence the name of the technology, “microfluidic Very Large-Scale Integration” (mVLSI). A roadblock … The paper presents the state-of-the-art in mVLSI platforms and emerging research challenges in the area of continuous-flow microfluidics, focusing on testing techniques and fault-tolerant design.

  20. Design and Implementation of a WiFi Based Home Automation System

    OpenAIRE

    Ahmed ElShafee; Karim Alaa Hamed

    2012-01-01

    This paper presents a design and prototype implementation of new home automation system that uses WiFi technology as a network infrastructure connecting its parts. The proposed system consists of two main components; the first part is the server (web server), which presents system core that manages, controls, and monitors users- home. Users and system administrator can locally (LAN) or remotely (internet) manage and control system code. Second part is hardware interface m...

  1. FireProt: web server for automated design of thermostable proteins

    Science.gov (United States)

    Musil, Milos; Stourac, Jan; Brezovsky, Jan; Prokop, Zbynek; Zendulka, Jaroslav; Martinek, Tomas

    2017-01-01

    There is a continuous interest in increasing protein stability to enhance usability in numerous biomedical and biotechnological applications. A number of in silico tools for the prediction of the effect of mutations on protein stability have been developed recently. However, only single-point mutations with a small effect on protein stability are typically predicted with the existing tools and have to be followed by laborious protein expression, purification, and characterization. Here, we present FireProt, a web server for the automated design of multiple-point thermostable mutant proteins that combines structural and evolutionary information in its calculation core. FireProt utilizes sixteen tools and three protein engineering strategies for making reliable protein designs. The server is complemented with an interactive, easy-to-use interface that allows users to directly analyze and optionally modify designed thermostable mutants. FireProt is freely available at http://loschmidt.chemi.muni.cz/fireprot. PMID:28449074

  2. FILTRES: a 128 channels VLSI mixed front-end readout electronic development for microstrip detectors

    International Nuclear Information System (INIS)

    Anstotz, F.; Hu, Y.; Michel, J.; Sohler, J.L.; Lachartre, D.

    1998-01-01

    We present a VLSI digital-analog readout electronic chain for silicon microstrip detectors. The characteristics of this circuit have been optimized for the high resolution tracker of the CERN CMS experiment. The chip consists of 128 channels at 50 μm pitch. Each channel is composed of a charge amplifier, a CR-RC shaper, an analog memory, an analog processor, and an output FIFO read out serially by a multiplexer. The chip has been processed in the radiation-hard DMILL technology. This paper describes the architecture of the circuit and presents test results of the 128-channel full chain chip. (orig.)

  3. Design and Development of an Integrated Workstation Automation Hub

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Andrew; Ghatikar, Girish; Sartor, Dale; Lanzisera, Steven

    2015-03-30

    Miscellaneous Electronic Loads (MELs) account for one third of all electricity consumption in U.S. commercial buildings, and drive significant energy use in India. Many of the MEL-specific plug-load devices are concentrated at workstations in offices. The use of intelligence and of integrated controls and communications at the workstation (an Office Automation Hub) offers the opportunity to improve both energy efficiency and occupant comfort, along with services for Smart Grid operations. Software and hardware solutions are available from a wide array of vendors for the different components, but an integrated system with interoperable communications is yet to be developed and deployed. In this study, we propose system- and component-level specifications for the Office Automation Hub, their functions, and a prioritized list for the design of a proof-of-concept system. Leveraging the strength of both the U.S. and India technology sectors, this specification serves as a guide for researchers and industry in both countries to support the development, testing, and evaluation of a prototype product. Further evaluation of such integrated technologies for performance and cost is necessary to identify the potential to reduce energy consumption by MELs and to improve occupant comfort.

  4. Automated design of DC-excited flux-switching in-wheel motor using magnetic equivalent circuits

    NARCIS (Netherlands)

    Tang, Y.; Paulides, J.J.H.; Lomonova, E.A.

    2015-01-01

    DC-excited flux-switching motors (DCEFSMs) are increasingly considered as candidate traction motors for electric vehicles due to their robust and magnet-free structure with relatively high torque density and extendable speed range. In this paper, an automated design tool based on nonlinear magnetic

  5. Transformational VLSI Design

    DEFF Research Database (Denmark)

    Rasmussen, Ole Steen

    constructed. It contains a semantical embedding of Ruby in Zermelo-Fraenkel set theory (ZF) implemented in the Isabelle theorem prover. A small subset of Ruby, called Pure Ruby, is embedded as a conservative extension of ZF and characterised by an inductive definition. Many useful structures used...

  6. An automation of design and modelling tasks in NX Siemens environment with original software - generator module

    Science.gov (United States)

    Zbiciak, M.; Grabowik, C.; Janik, W.

    2015-11-01

    Nowadays the constructional design process is almost exclusively aided by CAD/CAE/CAM systems. It is estimated that nearly 80% of design activities have a routine nature, and these routine design tasks are highly susceptible to automation. Design automation is usually carried out with API tools which allow building original software for automating different engineering activities. In this paper, original software worked out in order to automate engineering tasks at the stage of product geometrical shape design is presented. The elaborated software works exclusively in the NX Siemens CAD/CAM/CAE environment and was prepared in Microsoft Visual Studio with application of the .NET technology and the NX SNAP library. The software functionality allows designing and modelling of spur and helicoidal involute gears. Moreover, it is possible to estimate relative manufacturing costs. With the Generator module it is possible to design and model both standard and non-standard gear wheels. The main advantage of a model generated in this way is its better representation of the involute curve in comparison to those drawn in the standard tools of specialized CAD systems. This comes from the fact that usually in CAD systems an involute curve is drawn through 3 points, corresponding to points located on the addendum circle, the reference diameter of the gear and the base circle respectively. In the Generator module the involute curve is drawn through 11 involute points located on and above the base and addendum circles, therefore the 3D gear wheel models are highly accurate. Application of the Generator module makes the modelling process very rapid, so that the gear wheel modelling time is reduced to several seconds. During the conducted research an analysis of the differences between the standard 3-point and the 11-point involutes was made. The results and conclusions drawn from the analysis are shown in detail.
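
The Generator module draws the tooth flank with 11 involute points instead of 3, which is straightforward to reproduce: the involute of the base circle has the parametric form x = r_b(cos t + t sin t), y = r_b(sin t - t cos t). A small sketch of that point generation follows; the base radius, unroll angle and point count are arbitrary illustration values, not taken from the paper.

```python
# Sketch of involute-curve point generation for a gear tooth flank:
#   x(t) = r_b (cos t + t sin t),   y(t) = r_b (sin t - t cos t).
# Base radius, unroll angle and number of points are illustrative values only.
import math

def involute_points(r_base: float, t_max: float, n_points: int):
    """Return n_points (x, y) samples of the involute unrolled up to angle t_max."""
    pts = []
    for k in range(n_points):
        t = t_max * k / (n_points - 1)
        x = r_base * (math.cos(t) + t * math.sin(t))
        y = r_base * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts

# Denser sampling (11 points versus 3) follows the true curve much more closely
# between the base circle and the addendum circle.
for x, y in involute_points(r_base=20.0, t_max=0.6, n_points=11):
    print(f"({x:7.3f}, {y:7.3f})")
```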

  7. Process understanding and cooperative design. Keys to high quality automation

    International Nuclear Information System (INIS)

    Tommila, T.; Heinonen, R.

    1995-01-01

    A systematic approach to the specification of process control systems, and four practical methods supporting user participation and interdisciplinary co-operation are described. The main steps of the design approach are: (1) hierarchical decomposition of the plant to process items of different types; (2) analysis and definition of requirements and control strategies associated with each process item; (3) definition of automation degree; and (4) functional specification of the control system and its user interface. The specification language used for this step is a combination of principles found in object oriented design, structured analysis as well as new language standards for programmable controllers and open information systems. The design review methods presented include structured control strategy meetings, safety analysis of sequential controls, review of graphic displays, and a usability questionnaire for existing plants. These methods can be used to elicit users' needs and operational experience, to gain a common understanding of the process functionality, or to detect errors in design specifications or in existing systems. (8 refs., 9 figs.)

  8. The design of 3D scaffold for tissue engineering using automated scaffold design algorithm.

    Science.gov (United States)

    Mahmoud, Shahenda; Eldeib, Ayman; Samy, Sherif

    2015-06-01

    Several advances have been introduced in the field of bone regenerative medicine, and a new term, tissue engineering (TE), was created. In TE, a highly porous artificial extracellular matrix, or scaffold, is required to accommodate cells and guide their growth in three dimensions. The design of scaffolds with desirable internal and external structure represents a challenge for TE. In this paper, we introduce a new method known as automated scaffold design (ASD) for designing a 3D scaffold with minimum mismatches in its geometrical parameters. The method makes use of the k-means clustering algorithm to separate the different tissues and hence identify the defective bone portions. The segmented portions of the different slices are registered to construct the 3D volume of the data. It also uses an isosurface rendering technique for 3D visualization of the scaffold and the bones, providing the ability to visualize the transplanted as well as the normal bone portions. The proposed system shows good performance in both the segmentation results and the visualization aspects.
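
The ASD method separates tissues in each slice with k-means clustering before stacking the slices into a 3D volume. A compact sketch of that clustering step on a single flattened slice of intensity values follows (pure NumPy); the synthetic data, the number of clusters and the iteration count are illustrative, not the paper's settings.

```python
# Sketch of the per-slice k-means step: pixel intensities are grouped into k
# clusters, which separates bone-like tissue from the rest before the slices
# are registered into a 3D volume. Data and parameters are illustrative only.
import numpy as np

def kmeans_1d(values: np.ndarray, k: int = 3, iters: int = 20) -> np.ndarray:
    """Cluster scalar intensities into k groups; return per-pixel labels."""
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(values.shape, dtype=int)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster centre, then update centres.
        labels = np.argmin(np.abs(values[..., None] - centers), axis=-1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels

rng = np.random.default_rng(0)
slice_img = np.concatenate([rng.normal(40, 5, 500),     # soft-tissue intensities
                            rng.normal(200, 10, 120)])  # bone-like intensities
labels = kmeans_1d(slice_img, k=2)
print("pixels per cluster:", np.bincount(labels))
```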

  9. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  10. Design considerations on user-interaction for semi-automated driving

    NARCIS (Netherlands)

    van den Beukel, Arie Paul; van der Voort, Mascha C.

    2015-01-01

    The automotive industry has recently made first steps towards implementation of automated driving, by introducing lateral control as addition to longitudinal control (i.e. ACC). This automated control is allowed during specific situations within existing infrastructure (e.g. motorway cruising).

  11. An Automated System for Garment Texture Design Class Identification

    Directory of Open Access Journals (Sweden)

    Emon Kumar Dey

    2015-09-01

    Automatic identification of garment design class might play an important role in the garments and fashion industry. To achieve this, essential initial work is found in the literature, for example the construction of a garment database, automatic segmentation of garments from real-life images, and categorizing them into types of garments such as shirts, jackets, tops, skirts, etc. It is now essential to find a system that can identify the particular design (printed, striped or single color) of a garment product, so that an automated system can recommend garment trends. In this paper, we have focused on this specific issue and thus propose two new descriptors, namely Completed CENTRIST (cCENTRIST) and Ternary CENTRIST (tCENTRIST). To test these descriptors, we used two different publicly available databases. The experimental results on these databases demonstrate that both cCENTRIST and tCENTRIST achieve about 3% higher accuracy than the existing state-of-the-art methods.
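
Both proposed descriptors build on CENTRIST, which histograms the census transform of the image: each interior pixel is encoded by an 8-bit code of comparisons with its neighbours. A small NumPy sketch of that underlying census transform is given below; the completed/ternary extensions of the paper are not reproduced, and the random test image is purely illustrative.

```python
# Sketch of the census transform underlying CENTRIST-style descriptors: each
# interior pixel gets an 8-bit code, one bit per neighbour, set when the
# neighbour is not brighter than the centre; the descriptor is then a 256-bin
# histogram of these codes. cCENTRIST/tCENTRIST extensions are not shown.
import numpy as np

def census_transform(img: np.ndarray) -> np.ndarray:
    """Return the 8-bit census code of every interior pixel of a 2-D image."""
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour <= centre).astype(np.uint8) << bit
    return codes

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8))
hist = np.bincount(census_transform(img).ravel(), minlength=256)
print(hist.nonzero()[0][:10])   # the CENTRIST descriptor is this 256-bin histogram
```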

  12. The design of an automated electrolytic enrichment apparatus for tritium

    Energy Technology Data Exchange (ETDEWEB)

    Myers, J.L.

    1994-12-01

    The Radiation Analytical Sciences (RAS) Section at Lawrence Livermore National Laboratory performs analysis of low-level tritium concentrations in various natural water samples from the Tri-Valley Area, the DOE Nevada Test Site, Site 300 in Tracy, CA, and various other places around the world. Tritium, a radioactive isotope of hydrogen present at low levels, is pre-concentrated in the RAS laboratory using an electrolytic enrichment apparatus. These enriched waters are later analyzed by liquid scintillation counting to determine the tritium activity. The enrichment procedure and the subsequent purification process by vacuum distillation are currently undertaken manually and are therefore highly labor-intensive. The whole process typically takes about 2 to 3 weeks to complete a batch of 30 samples, with dedicated personnel operating the process. The goal is to automate the entire process, specifically making the operation PC/LabVIEW-controlled with real-time monitoring capability. My involvement was in the design and fabrication of a prototypical automated electrolytic enrichment cell. Work will be done on optimizing the electrolytic process by assessing the different parameters of the enrichment procedure. Hardware and software development have also been an integral component of this project.

  13. A Sequential Circuit-Based IP Watermarking Algorithm for Multiple Scan Chains in Design-for-Test

    Directory of Open Access Journals (Sweden)

    C. Wu

    2011-06-01

    In Very Large Scale Integrated Circuit (VLSI) design, the existing Design-for-Test (DFT) based watermarking techniques usually insert the watermark by reordering scan cells, which causes large resource overhead, low security and a low coverage rate of watermark detection. A novel scheme is proposed to watermark multiple scan chains in DFT to solve these problems. The proposed scheme adopts the DFT scan test model of VLSI design and uses a Linear Feedback Shift Register (LFSR) for pseudo-random test vector generation. All of the test vectors are shifted in at the scan input for the construction of multiple scan chains with minimum correlation. Specific registers in the multiple scan chains are changed by the watermark circuit to watermark the design. The watermark can be effectively detected without interfering with the normal function of the circuit, even after the chip is packaged. The experimental results on several ISCAS benchmarks show that the proposed scheme has lower resource overhead, a lower probability of coincidence and a higher coverage rate of watermark detection compared with the existing methods.
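
The watermarking scheme drives the scan chains with pseudo-random test vectors from an LFSR. A small sketch of a Fibonacci LFSR generating such vectors is given below; the 16-bit width and tap positions are a common maximal-length choice and the seed is arbitrary, since the paper does not specify its exact polynomial.

```python
# Sketch of pseudo-random scan test vector generation with a Fibonacci LFSR.
# A 16-bit register with taps at bits 16, 14, 13 and 11 is a standard
# maximal-length choice; the paper's actual polynomial is not specified.

def lfsr16(seed: int):
    """Infinite generator of 16-bit pseudo-random vectors (seed must be nonzero)."""
    state = seed & 0xFFFF
    while True:
        # XOR the tap bits to form the feedback bit, then shift it in.
        fb = ((state >> 15) ^ (state >> 13) ^ (state >> 12) ^ (state >> 10)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        yield state

gen = lfsr16(seed=0xACE1)
for _ in range(5):
    print(f"{next(gen):016b}")   # vectors shifted into the scan chains
```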

  14. Adaptive Automation Based on Air Traffic Controller Decision-Making

    NARCIS (Netherlands)

    IJtsma (Student TU Delft), Martijn; Borst, C.; Mercado Velasco, G.A.; Mulder, M.; van Paassen, M.M.; Tsang, P.S.; Vidulich, M.A.

    2017-01-01

    Through smart scheduling and triggering of automation support, adaptive automation has the potential to balance air traffic controller workload. The challenge in the design of adaptive automation systems is to decide how and when the automation should provide support. This paper describes the design

  15. User interface design principles for the SSM/PMAD automated power system

    Science.gov (United States)

    Jakstas, Laura M.; Myers, Chris J.

    1991-01-01

    Martin Marietta has developed a user interface for the space station module power management and distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined. An engineer's interactions with the system are also described.

  16. A novel VLSI processor for high-rate, high resolution spectroscopy

    CERN Document Server

    Pullia, Antonio; Gatti, E; Longoni, A; Buttler, W

    2000-01-01

    A novel time-variant VLSI shaper amplifier, suitable for multi-anode Silicon Drift Detectors or other multi-element solid-state X-ray detection systems, is proposed. The new read-out scheme has been conceived for demanding applications with synchrotron light sources, such as X-ray holography or EXAFS, where both high count-rates and high-energy resolutions are required. The circuit is of the linear time-variant class, accepts randomly distributed events and features: a finite-width (1-10 μs) quasi-optimal weight function, an ultra-low-level energy discrimination (approx. 150 eV), and full compatibility for monolithic integration in CMOS technology. Its impulse response has a staircase-like shape, but the weight function (which is in general different from the impulse response in time-variant systems) is quasi-trapezoidal. The operation principles of the new scheme as well as the first experimental results obtained with a prototype of the circuit are presented and discussed in this work.

  17. Principles of VLSI RTL design a practical guide

    CERN Document Server

    Churiwala, Sanjay; Gianfagna, Mike

    2011-01-01

    This book examines the impact of register transfer level (RTL) design choices that may result in issues of testability, data synchronization across clock domains, synthesizability, power consumption and routability, that appear later in the product lifecycle.

  18. Design of an automated solar concentrator for the pyrolysis of scrap rubber

    International Nuclear Information System (INIS)

    Zeaiter, Joseph; Ahmad, Mohammad N.; Rooney, David; Samneh, Bechara; Shammas, Elie

    2015-01-01

    Highlights: • Design of a solar concentrator with high focal-point temperatures. • Development of an automated continuous solar tracking system. • Catalytic pyrolysis to convert waste rubber tire to gas and liquid products. • The liquid components had high yields of C10-C29 hydrocarbons. • The gaseous components were mainly propene and cyclobutene. - Abstract: An automated solar reactor system was designed and built to carry out catalytic pyrolysis of scrap rubber tires at 550 °C. To maximize solar energy concentration, a two degrees-of-freedom automated sun tracking system was developed and implemented. Both the azimuth and zenith angles were controlled via feedback from six photo-resistors positioned on a Fresnel lens. The pyrolysis of rubber tires was tested in the presence of two types of acidic catalysts, H-beta and H-USY. Additionally, a photoactive TiO2 catalyst was used and the products were compared in terms of gas yields and composition. The catalysts were characterized by BET analysis and the pyrolysis gases and liquids were analyzed using GC-MS. The oil and gas yields were relatively high, with the highest gas yield reaching 32.8% with the H-beta catalyst, while TiO2 gave the same results as thermal pyrolysis without any catalyst. In the presence of zeolites, the dominant gasoline-like components in the gas were propene and cyclobutene. The TiO2 and non-catalytic experiments produced a gas containing gasoline-like products of mainly isoprene (76.4% and 88.4% respectively). As for the liquids, they were composed of numerous components spread over a wide distribution of C10 to C29 hydrocarbons of naphthalene and cyclohexane/ene derivatives

  19. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for Positron Emission Tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high-resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
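
The architecture accelerates the forward- and back-projection steps of the EM (MLEM) iteration, lambda_new = lambda / (A^T 1) * A^T( y / (A lambda) ). A toy NumPy sketch of one such iteration is given below; the system matrix, image size and measured counts are invented illustration values, not related to the proposed hardware.

```python
# Toy sketch of the EM (MLEM) update for emission tomography:
#   x_new = x / (A^T 1) * A^T( y / (A x) )
# The forward projection A @ x and back projection A.T @ r are the two
# operations the VLSI arrays accelerate. All sizes and data are toy values.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 4))              # 6 detector bins, 4 image pixels (toy sizes)
true_img = np.array([1.0, 4.0, 2.0, 0.5])
y = A @ true_img                    # noiseless "measured" projections

x = np.ones(4)                      # uniform initial image estimate
sens = A.T @ np.ones(A.shape[0])    # sensitivity image A^T 1
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)   # forward project, compare to data
    x = x / sens * (A.T @ ratio)           # back project and update
print(np.round(x, 3))               # converges toward true_img
```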

  20. Proof-of-concept automation of propellant processing

    Science.gov (United States)

    Ramohalli, Kumar; Schallhorn, P. A.

    1989-01-01

    For space-based propellant production, automation of the process is needed. Currently, all phases of terrestrial production involve some form of human interaction. A mixer was acquired to help perform the tasks of automation. A heating system to be used with the mixer was designed, built, and installed, and tests performed on the heating system verify the design criteria. An IBM PS/2 personal computer was acquired for the future automation work. It is hoped that some of the mixing process itself will be automated. This is a concept demonstration task, proving that propellant production can be automated reliably.

  1. DESIGN OF LOW EPI AND HIGH THROUGHPUT CORDIC CELL TO IMPROVE THE PERFORMANCE OF MOBILE ROBOT

    Directory of Open Access Journals (Sweden)

    P. VELRAJKUMAR

    2014-04-01

    This paper mainly focuses on a pass-logic based design, which gives a low Energy Per Instruction (EPI) and high-throughput COordinate Rotation DIgital Computer (CORDIC) cell for robotic exploration applications. The basic components of the CORDIC cell, namely the register, multiplexer and proposed adder, are designed using pass transistor logic (PTL). The proposed adder is implemented in a bit-parallel iterative CORDIC circuit designed with the DSCH2 VLSI CAD tool, and the layouts are generated by the Microwind 3 VLSI CAD tool. The propagation delay, area and power dissipation are calculated from the simulated results for the proposed adder based CORDIC cell. The EPI, throughput and effect of temperature are calculated from the generated layout, whose output parameters are analysed using the BSIM4 advanced analyzer. The simulated results of the proposed adder based CORDIC circuit are compared with other adder based CORDIC circuits. From the analysis of these simulated results, it was found that the proposed adder based CORDIC circuit dissipates low power, gives a faster response, low EPI and high throughput.
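
A CORDIC cell computes rotations with shift-and-add iterations and a small arctangent table only. A compact behavioural sketch of the rotation-mode iteration follows; the iteration count and test angle are illustrative, and the gear toward the hardware (fixed-point widths, PTL adders) is not modelled.

```python
# Sketch of rotation-mode CORDIC: rotate (x, y) by a target angle using only
# shifts, adds and a small arctangent table, as a CORDIC cell does in hardware.
# The iteration count and the test angle are illustrative choices.
import math

def cordic_rotate(angle_rad: float, n_iter: int = 16):
    """Return (cos(angle), sin(angle)) computed with CORDIC iterations."""
    atan_tbl = [math.atan(2.0 ** -i) for i in range(n_iter)]
    # CORDIC gain compensation constant K = prod(1 / sqrt(1 + 2^-2i)).
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, angle_rad
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0               # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_tbl[i]
    return x * K, y * K                            # compensate the CORDIC gain

c, s = cordic_rotate(math.radians(30))
print(round(c, 5), round(s, 5))                    # ~ (0.86603, 0.5)
```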

  2. Design and Implementation of a New Real-Time Frequency Sensor Used as Hardware Countermeasure

    Directory of Open Access Journals (Sweden)

    Manuel Pedro-Carrasco

    2013-09-01

    A new digital countermeasure against attacks related to the clock frequency is presented. This countermeasure, known as a frequency sensor, consists of a local oscillator, a transition detector, a measurement element and an output block. The countermeasure has been designed using a full-custom technique implemented in an Application-Specific Integrated Circuit (ASIC), and the implementation has been verified and characterized with an integrated design in a 0.35 µm standard Complementary Metal Oxide Semiconductor (CMOS) technology (a Very Large Scale Integration, VLSI, implementation). The proposed solution is configurable in resolution time and allowed period range, achieving a minimum resolution time of only 1.91 ns and an initialization time of 5.84 ns. The proposed VLSI implementation shows better results than other solutions, such as digital ones based on semi-custom techniques and analog ones based on band-pass filters, all design parameters considered. Finally, a counter has been used to verify the good performance of the countermeasure in preventing the success of an attack.

  3. Design automation of ΔΣ switched capacitor modulators using spice and MATLAB

    Directory of Open Access Journals (Sweden)

    Mirković Dejan

    2014-01-01

    Considering that the design of contemporary integrated circuits (ICs) is practically impossible without sophisticated Electronic Design Automation (EDA) software, this paper offers some considerations on that issue. As technology processes advance on a yearly basis, the EDA industry is forced to follow this trend as well, which in turn requires IC designers to frequently and efficiently adapt to new working environments. The authors of this paper suggest a method for high-level circuit analysis that is based on using common (open-source or low-cost) circuit simulators while remaining precise and fast enough to meet the requirements imposed by demanding mixed-signal blocks. The paper demonstrates the proposed EDA procedure on the example of a second-order ΔΣ modulator design. It illustrates considerable simulation time savings, which are more than welcome in the world of analogue and mixed-signal design. [Project of the Ministry of Science of the Republic of Serbia, no. TR32004: Advanced technologies for measurement, control, and communication on the electric grid]
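
The paper automates high-level simulation of a second-order ΔΣ switched-capacitor modulator. A behavioural sketch of such a modulator is given below: two discrete-time integrators and a 1-bit quantizer in the classic double-loop topology with feedback gains 1 and 2. The coefficients and test signal are textbook illustrations, not the switched-capacitor implementation sized in the paper.

```python
# Behavioural sketch of a second-order delta-sigma modulator (two discrete-time
# integrators, 1-bit quantizer, feedback gains 1 and 2, the classic double-loop
# topology). Coefficients and the test tone are illustrative only.
import math

def dsm2(u):
    """Second-order 1-bit delta-sigma modulation of input samples u (|u| < ~0.8)."""
    i1 = i2 = 0.0
    v = 1.0                      # previous quantizer output (feedback)
    bits = []
    for x in u:
        i1 += x - v              # first integrator, unity feedback
        i2 += i1 - 2.0 * v       # second integrator, feedback gain 2
        v = 1.0 if i2 >= 0.0 else -1.0
        bits.append(v)
    return bits

n = 4096
u = [0.5 * math.sin(2 * math.pi * k / 512) for k in range(n)]   # slow test tone
bits = dsm2(u)
# Crude decimation: block averages of the bitstream already track the tone.
block = 64
recon = [sum(bits[k:k + block]) / block for k in range(0, n, block)]
print([round(r, 2) for r in recon[:8]])
```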

  4. Process automation

    International Nuclear Information System (INIS)

    Moser, D.R.

    1986-01-01

    Process automation technology has been pursued in the chemical processing industries and to a very limited extent in nuclear fuel reprocessing. Its effective use has been restricted in the past by the lack of diverse and reliable process instrumentation and the unavailability of sophisticated software designed for process control. The Integrated Equipment Test (IET) facility was developed by the Consolidated Fuel Reprocessing Program (CFRP) in part to demonstrate new concepts for control of advanced nuclear fuel reprocessing plants. A demonstration of fuel reprocessing equipment automation using advanced instrumentation and a modern, microprocessor-based control system is nearing completion in the facility. This facility provides for the synergistic testing of all chemical process features of a prototypical fuel reprocessing plant that can be attained with unirradiated uranium-bearing feed materials. The unique equipment and mission of the IET facility make it an ideal test bed for automation studies. This effort will provide for the demonstration of the plant automation concept and for the development of techniques for similar applications in a full-scale plant. A set of preliminary recommendations for implementing process automation has been compiled. Some of these concepts are not generally recognized or accepted. The automation work now under way in the IET facility should be useful to others in helping avoid costly mistakes because of the underutilization or misapplication of process automation. 6 figs

  5. Design and demonstration of a multitechnology FPGA for photonic information processing

    Science.gov (United States)

    Mal, Prosenjit; Hawk, Chris; Toshniwal, Kavita; Beyette, Fred R., Jr.

    2003-11-01

    We present here a novel architecture for a multi-technology field programmable gate array (MT-FPGA). Implemented in a conventional CMOS VLSI technology, the architecture is suitable for prototyping photonic information processing systems. We report that this new FPGA architecture will enable the design of reconfigurable systems that incorporate technologies outside the traditional electronic domain.

  6. Automated procedures for sizing aerospace vehicle structures /SAVES/

    Science.gov (United States)

    Giles, G. L.; Blackburn, C. L.; Dixon, S. C.

    1972-01-01

    Results from a continuing effort to develop automated methods for structural design are described. A system of computer programs presently under development, called SAVES, is intended to automate the preliminary structural design of a complete aerospace vehicle. Each step in the automated design process of the SAVES system of programs is discussed, with emphasis placed on the use of automated routines for generation of finite-element models. The versatility of these routines is demonstrated by structural models generated for a space shuttle orbiter, an advanced technology transport, and a hydrogen-fueled Mach 3 transport. Illustrative numerical results are presented for the Mach 3 transport wing.

  7. Petri Nets

    Indian Academy of Sciences (India)

    … In Part 1 of this two-part article, we have seen … programmable logic controllers and VLSI arrays, office automation systems, workflow management systems, … complex discrete event and real-time systems; and Petri nets.

  8. Procedure automation: the effect of automated procedure execution on situation awareness and human performance

    International Nuclear Information System (INIS)

    Andresen, Gisle; Svengren, Haakan; Heimdal, Jan O.; Nilsen, Svein; Hulsund, John-Einar; Bisio, Rossella; Debroise, Xavier

    2004-04-01

    As advised by the procedure workshop convened in Halden in 2000, the Halden Project conducted an experiment on the effect of automation of Computerised Procedure Systems (CPS) on situation awareness and human performance. The expected outcome of the study was to provide input for guidance on CPS design, and to support the Halden Project's ongoing research on human reliability analysis. The experiment was performed in HAMMLAB using the HAMBO BWR simulator and the COPMA-III CPS. Eight crews of operators from Forsmark 3 and Oskarshamn 3 participated. Three research questions were investigated: 1) Does procedure automation create Out-Of-The-Loop (OOTL) performance problems? 2) Does procedure automation affect situation awareness? 3) Does procedure automation affect crew performance? The independent variable, 'procedure configuration', had four levels: paper procedures, manual CPS, automation with breaks, and full automation. The results showed that the operators experienced OOTL problems in full automation, but that situation awareness and crew performance (response time) were not affected. One possible explanation for this is that the operators monitored the automated procedure execution conscientiously, something which may have prevented the OOTL problems from having negative effects on situation awareness and crew performance. In a debriefing session, the operators clearly expressed their dislike for the full automation condition, but that automation with breaks could be suitable for some tasks. The main reason why the operators did not like the full automation was that they did not feel being in control. A qualitative analysis addressing factors contributing to response time delays revealed that OOTL problems did not seem to cause delays, but that some delays could be explained by the operators having problems with the freeze function of the CPS. Also other factors such as teamwork and operator tendencies were of importance. Several design implications were drawn

  9. Design and building of a homemade sample changer for automation of the irradiation in neutron activation analysis technique

    International Nuclear Information System (INIS)

    Gago, Javier; Hernandez, Yuri; Baltuano, Oscar; Bedregal, Patricia; Lopez, Yon; Urquizo, Rafael

    2014-01-01

    Because the RP-10 research reactor operates during weekends, it was necessary to design and build a sample changer for irradiation as part of the automation of the neutron activation analysis technique. The device consists of an aluminum turntable disk which can accommodate 19 polyethylene capsules containing samples to be sent, using the pneumatic transfer system, from the laboratory to the irradiation position. The system is operated by a control switchboard that sends and returns capsules after a variable preset time and by two different routes, allowing the determination of short-, medium- and long-lived radionuclides. Another mechanism, called an 'exchange valve', was designed to change the travel paths (pipelines), allowing irradiated samples to be stored for a longer time in the reactor hall. The system design has allowed complete automation of this technique, enabling the irradiation of samples without the presence of an analyst. The design, construction and operation of the device are described and presented in this article. (authors).

  10. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.

  11. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  12. Laboratory automation: trajectory, technology, and tactics.

    Science.gov (United States)

    Markin, R S; Whalen, S A

    2000-05-01

    Laboratory automation is in its infancy, following a path parallel to the development of laboratory information systems in the late 1970s and early 1980s. Changes on the horizon in healthcare and clinical laboratory service that affect the delivery of laboratory results include the increasing age of the population in North America, the implementation of the Balanced Budget Act (1997), and the creation of disease management companies. Major technology drivers include outcomes optimization and phenotypically targeted drugs. Constant cost pressures in the clinical laboratory have forced diagnostic manufacturers into less than optimal profitability states. Laboratory automation can be a tool for the improvement of laboratory services and may decrease costs. The key to improvement of laboratory services is implementation of the correct automation technology. The design of this technology should be driven by required functionality. Automation design issues should be centered on the understanding of the laboratory and its relationship to healthcare delivery and the business and operational processes in the clinical laboratory. Automation design philosophy has evolved from a hardware-based approach to a software-based approach. Process control software to support repeat testing, reflex testing, and transportation management, and overall computer-integrated manufacturing approaches to laboratory automation implementation are rapidly expanding areas. It is clear that hardware and software are functionally interdependent and that the interface between the laboratory automation system and the laboratory information system is a key component. The cost-effectiveness of automation solutions suggested by vendors, however, has been difficult to evaluate because the number of automation installations is small and the precision with which operational data have been collected to determine payback is suboptimal. The trend in automation has moved from total laboratory automation to a

  13. Therese: presentation of the project

    Energy Technology Data Exchange (ETDEWEB)

    Pendibidu, J M

    1982-05-01

    Therese (Terabit Reseau) is a project built around a powerful local network with high transmission speed. A wide range of features has been incorporated into the product so that it can be used as a general multi-purpose tool in such areas as software engineering, artificial intelligence, robotics, office automation, VLSI design, fundamental mechanics, theoretical physics, applied mathematics, computer assisted education, high speed satellite communications, performance evaluation, general system theory, reliability, high resolution graphics, public messages, private messages, fast Fourier transform, personal computing, image processing, etc. 11 references.

  14. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
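
    For illustration, the peak-split area feature described above can be prototyped in a few lines of NumPy before committing it to hardware. The sketch below is only a software model of the feature computation (the snippet length and synthetic data are assumptions), not the authors' VLSI implementation.

```python
import numpy as np

def peak_split_area_features(spike):
    """Split a spike waveform at its peak and use the area (sum of absolute
    amplitudes) of each portion as a two-dimensional feature vector."""
    spike = np.asarray(spike, dtype=float)
    peak = int(np.argmax(np.abs(spike)))            # locate the peak sample
    left_area = np.sum(np.abs(spike[:peak + 1]))    # area up to and including the peak
    right_area = np.sum(np.abs(spike[peak:]))       # area from the peak onwards
    return np.array([left_area, right_area])

# Usage: features for a batch of aligned spike snippets (synthetic data)
spikes = np.random.randn(100, 64)                   # 100 snippets, 64 samples each
features = np.vstack([peak_split_area_features(s) for s in spikes])
print(features.shape)                               # -> (100, 2)
```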

  15. Autonomy, Automation, and Systems

    Science.gov (United States)

    Turner, Philip R.

    1987-02-01

    Aerospace industry interest in autonomy and automation, given fresh impetus by the national goal of establishing a Space Station, is becoming a major item of research and technology development. The promise of new technology arising from research in Artificial Intelligence (AI) has focused much attention on its potential in autonomy and automation. These technologies can improve performance in autonomous control functions that involve planning, scheduling, and fault diagnosis of complex systems. There are, however, many aspects of system and subsystem design in an autonomous system that impact AI applications, but do not directly involve AI technology. Development of a system control architecture, establishment of an operating system within the design, providing command and sensory data collection features appropriate to automated operation, and the use of design analysis tools to support system engineering are specific examples of major design issues. Aspects such as these must also receive attention and technology development support if we are to implement complex autonomous systems within the realistic limitations of mass, power, cost, and available flight-qualified technology that are all-important to a flight project.

  16. Development strategy and process models for phased automation of design and digital manufacturing electronics

    Science.gov (United States)

    Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.

    2018-03-01

    A strategy for ensuring the quality of electronics is presented as the most important concern. To provide quality, the process sequence is considered and modeled as a Markov chain. The improvement relies on simple database support for design for manufacturing, intended for future step-by-step development. Phased automation of the design and digital manufacturing of electronics is proposed. MATLAB modelling results showed an increase in effectiveness. New tools and software should be more effective. A primary digital model is proposed to represent the product across the process sequence, from individual processes up to the whole life cycle.
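
    As an illustration of the Markov-chain view of a process sequence, the sketch below evaluates a small chain of design and manufacturing stages. The states and transition probabilities are invented for the example and are not taken from the paper.

```python
import numpy as np

# Hypothetical stages of a phased design/manufacturing sequence
states = ["design", "verification", "fabrication", "test", "done"]

# Illustrative transition matrix: each row sums to 1; probabilities are invented
P = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0],   # design -> verification
    [0.2, 0.0, 0.8, 0.0, 0.0],   # verification may send work back to design
    [0.0, 0.0, 0.0, 1.0, 0.0],   # fabrication -> test
    [0.0, 0.1, 0.0, 0.0, 0.9],   # test may trigger re-verification
    [0.0, 0.0, 0.0, 0.0, 1.0],   # done is absorbing
])

# Distribution over stages after 10 steps, starting from "design"
p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
p_after_10 = p0 @ np.linalg.matrix_power(P, 10)
print(dict(zip(states, np.round(p_after_10, 3))))
```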

  17. High performance integer arithmetic circuit design on FPGA architecture, implementation and design automation

    CERN Document Server

    Palchaudhuri, Ayan

    2016-01-01

    This book describes the optimized implementations of several arithmetic datapath, controlpath and pseudorandom sequence generator circuits for realization of high performance arithmetic circuits targeted towards a specific family of the high-end Field Programmable Gate Arrays (FPGAs). It explores regular, modular, cascadable, and bit-sliced architectures of these circuits, by directly instantiating the target FPGA-specific primitives in the HDL. Every proposed architecture is justified with detailed mathematical analyses. Simultaneously, constrained placement of the circuit building blocks is performed, by placing the logically related hardware primitives in close proximity to one another by supplying relevant placement constraints in the Xilinx proprietary “User Constraints File”. The book covers the implementation of a GUI-based CAD tool named FlexiCore integrated with the Xilinx Integrated Software Environment (ISE) for design automation of platform-specific high-performance arithmetic circuits from us...

  18. Automated design and optimization of flexible booster autopilots via linear programming, volume 1

    Science.gov (United States)

    Hauser, F. D.

    1972-01-01

    A nonlinear programming technique was developed for the automated design and optimization of autopilots for large flexible launch vehicles. This technique, which resulted in the COEBRA program, uses the iterative application of linear programming. The method deals directly with the three main requirements of booster autopilot design: to provide (1) good response to guidance commands; (2) response to external disturbances (e.g. wind) to minimize structural bending moment loads and trajectory dispersions; and (3) stability with specified tolerances on the vehicle and flight control system parameters. The method is applicable to very high order systems (30th and greater per flight condition). Examples are provided that demonstrate the successful application of the employed algorithm to the design of autopilots for both single and multiple flight conditions.

  19. Automated Test Case Generation for an Autopilot Requirement Prototype

    Science.gov (United States)

    Giannakopoulou, Dimitra; Rungta, Neha; Feary, Michael

    2011-01-01

    Designing safety-critical automation with robust human interaction is a difficult task that is susceptible to a number of known Human-Automation Interaction (HAI) vulnerabilities. It is therefore essential to develop automated tools that provide support both in the design and rapid evaluation of such automation. The Automation Design and Evaluation Prototyping Toolset (ADEPT) enables the rapid development of an executable specification for automation behavior and user interaction. ADEPT supports a number of analysis capabilities, thus enabling the detection of HAI vulnerabilities early in the design process, when modifications are less costly. In this paper, we advocate the introduction of a new capability to model-based prototyping tools such as ADEPT. The new capability is based on symbolic execution that allows us to automatically generate quality test suites based on the system design. Symbolic execution is used to generate both user input and test oracles: user input drives the testing of the system implementation, and test oracles ensure that the system behaves as designed. We present early results in the context of a component in the Autopilot system modeled in ADEPT, and discuss the challenges of test case generation in the HAI domain.

  20. Design and Achievement of User Interface Automation Testing of Linux Based on Element Tree of DogTail

    Directory of Open Access Journals (Sweden)

    Yuan Wen-Chao

    2017-01-01

    Full Text Available As Linux becomes more popular around the world, the advantages of open-source software encourage automated UI testing with a unified testing framework. UI testing can verify the soundness of the Linux user interface and the correctness of its widgets. To replace vague, repetitive manual testing and improve efficiency, this paper implements automated UI testing under Linux and proposes a method for identifying and testing UI widgets based on the element tree of the DogTail automation testing framework. Using this method, an automated test plan is designed for the dialogs of the Red Hat Subscription Manager product under Red Hat Enterprise Linux. Repeated tests indicate that this plan can identify UI widgets accurately, describe the structure of the software clearly, help avoid software errors, and improve testing efficiency. It can also be used in internationalization testing to check translations.
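
    A minimal sketch of what element-tree-based testing with dogtail can look like is given below. It assumes a running AT-SPI accessibility session; the application name and widget names are hypothetical placeholders rather than the dialogs actually covered by the paper's test plan.

```python
# Minimal sketch of element-tree UI automation with dogtail.
# The application name and widget names below are hypothetical; the actual
# Red Hat Subscription Manager dialogs may differ.
from dogtail.tree import root
from dogtail.utils import run

run('subscription-manager-gui')                      # launch the application under test
app = root.application('subscription-manager-gui')   # locate it in the accessibility tree

# Walk the element tree to find a widget and exercise it
button = app.child(roleName='push button', name='Register')
button.click()

# Check that the expected dialog appears in the element tree
assert app.child(roleName='dialog', name='System Registration') is not None
```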

  1. Automated cloning methods.; TOPICAL

    International Nuclear Information System (INIS)

    Collart, F.

    2001-01-01

    Argonne has developed a series of automated protocols to generate bacterial expression clones by using a robotic system designed to be used in procedures associated with molecular biology. The system provides plate storage, temperature control from 4 to 37 C at various locations, and Biomek and Multimek pipetting stations. The automated system consists of a robot that transports sources from the active station on the automation system. Protocols for the automated generation of bacterial expression clones can be grouped into three categories (Figure 1). Fragment generation protocols are initiated on day one of the expression cloning procedure and encompass those protocols involved in generating purified coding region (PCR)

  2. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)
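
    The offline track fits that the network outputs were compared against amount to a least-squares straight-line fit through hit positions reconstructed from drift times. The sketch below illustrates such a reference fit; the layer geometry, drift velocity, and drift times are invented, and the left-right ambiguity is ignored.

```python
import numpy as np

# Reference fit of a straight track x = slope * z + intercept through hit
# positions derived from drift times; geometry and values are illustrative only.
layer_z = np.array([0.0, 10.0, 20.0])          # cm, one measurement layer each
drift_time = np.array([120.0, 95.0, 70.0])     # ns (synthetic drift times)
v_drift = 0.005                                # cm/ns, assumed drift velocity
wire_x = np.array([1.0, 1.2, 1.4])             # cm, sense-wire positions

hit_x = wire_x + v_drift * drift_time          # drift distance added to wire position
slope, intercept = np.polyfit(layer_z, hit_x, 1)
print(f"slope {slope:.4f}, intercept {intercept:.3f} cm")
```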

  3. Automated Work Package: Conceptual Design and Data Architecture

    Energy Technology Data Exchange (ETDEWEB)

    Al Rashdan, Ahmad [Idaho National Lab. (INL), Idaho Falls, ID (United States); Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Agarwal, Vivek [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-05-26

    The automated work package (AWP) is one of the U.S. Department of Energy’s (DOE) Light Water Reactor Sustainability Program efforts to enhance the safety and economics of the nuclear power industry. An AWP is an adaptive and interactive work package that intelligently drives the work process according to the plant condition, resource status, and users' progress. The AWP aims to automate several manual tasks of the work process to enhance human performance and reduce human errors. Electronic work packages (eWPs), studied by the Electric Power Research Institute (EPRI), are work packages that rely to varying extents on electronic data processing and presentation. AWPs are the future of eWPs. They are envisioned to incorporate the advanced technologies of the future, and thus address the unresolved deficiencies associated with the eWPs in a nuclear power plant. In order to define the AWP, it is necessary to develop an ideal envisioned scenario of the future work process without any current technology restriction. The approach followed to develop this scenario is specific to every stage of the work process execution. The scenario development resulted in fifty advanced functionalities that can be part of the AWP. To rank the importance of these functionalities, a survey was conducted involving several U.S. nuclear utilities. The survey aimed at determining the current need of the nuclear industry with respect to the current work process, i.e. what the industry is satisfied with, and where the industry envisions potential for improvement. The survey evaluated the most promising functionalities resulting from the scenario development. The results demonstrated a significant desire to adopt the majority of these functionalities. The results of the survey are expected to drive the Idaho National Laboratory (INL) AWP research and development (R&D). In order to facilitate this mission, a prototype AWP is needed. Since the vast majority of earlier efforts focused on the

  4. Automated Work Package: Conceptual Design and Data Architecture

    International Nuclear Information System (INIS)

    Al Rashdan, Ahmad; Oxstrand, Johanna; Agarwal, Vivek

    2016-01-01

    The automated work package (AWP) is one of the U.S. Department of Energy's (DOE) Light Water Reactor Sustainability Program efforts to enhance the safety and economics of the nuclear power industry. An AWP is an adaptive and interactive work package that intelligently drives the work process according to the plant condition, resource status, and users' progress. The AWP aims to automate several manual tasks of the work process to enhance human performance and reduce human errors. Electronic work packages (eWPs), studied by the Electric Power Research Institute (EPRI), are work packages that rely to varying extents on electronic data processing and presentation. AWPs are the future of eWPs. They are envisioned to incorporate the advanced technologies of the future, and thus address the unresolved deficiencies associated with the eWPs in a nuclear power plant. In order to define the AWP, it is necessary to develop an ideal envisioned scenario of the future work process without any current technology restriction. The approach followed to develop this scenario is specific to every stage of the work process execution. The scenario development resulted in fifty advanced functionalities that can be part of the AWP. To rank the importance of these functionalities, a survey was conducted involving several U.S. nuclear utilities. The survey aimed at determining the current need of the nuclear industry with respect to the current work process, i.e. what the industry is satisfied with, and where the industry envisions potential for improvement. The survey evaluated the most promising functionalities resulting from the scenario development. The results demonstrated a significant desire to adopt the majority of these functionalities. The results of the survey are expected to drive the Idaho National Laboratory (INL) AWP research and development (R&D). In order to facilitate this mission, a prototype AWP is needed. Since the vast majority of earlier efforts focused on the frontend

  5. A Multi-Agent-Based Intelligent Sensor and Actuator Network Design for Smart House and Home Automation

    Directory of Open Access Journals (Sweden)

    Fei Hu

    2013-08-01

    Full Text Available The smart-house technology aims to increase home automation and security with reduced energy consumption. A smart house consists of various intelligent sensors and actuators operating on different platforms with conflicting objectives. This paper proposes a multi-agent system (MAS design framework to achieve smart house automation. The novelties of this work include the developments of (1 belief, desire and intention (BDI agent behavior models; (2 a regulation policy-based multi-agent collaboration mechanism; and (3 a set of metrics for MAS performance evaluation. Simulations of case studies are performed using the Java Agent Development Environment (JADE to demonstrate the advantages of the proposed method.

  6. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    Science.gov (United States)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have revived and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
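
    The core idea, mapping a rule to a fixed-weight threshold unit, can be illustrated with a toy example. The rule, weights, and threshold below are invented for illustration and are not the translation scheme defined in the paper.

```python
import numpy as np

def threshold_unit(inputs, weights, threshold):
    """Fixed-weight neuron: fires (returns 1) when the weighted input sum
    reaches the threshold, otherwise returns 0."""
    return int(np.dot(inputs, weights) >= threshold)

# Toy rule: IF temperature_high AND pressure_high THEN alarm.
# Encoded as an AND unit: both antecedents must be true for the unit to fire.
antecedents = np.array([1, 1])    # [temperature_high, pressure_high], boolean inputs
alarm = threshold_unit(antecedents, weights=np.array([1.0, 1.0]), threshold=2.0)
print("alarm fired:", alarm)      # -> 1 only when both inputs are 1
```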

  7. SP-100 shield design automation process using expert system and heuristic search techniques

    International Nuclear Information System (INIS)

    Marcille, T.F.; Protsik, R.; Deane, N.A.; Hoover, D.G.

    1993-01-01

    The SP-100 shield subsystem design process has been modified to utilize the GE Corporate Research and Development program, ENGINEOUS (Tong 1990). ENGINEOUS is a software system that automates the use of Computer Aided Engineering (CAE) analysis programs in the engineering design process. The shield subsystem design process incorporates a nuclear subsystems design and performance code, a two-dimensional neutral particle transport code, several input processors and two general purpose neutronic output processors. Coupling these programs within ENGINEOUS provides automatic transition paths between applications, with no source code modifications. ENGINEOUS captures human design knowledge, as well as information about the specific CAE applications, and stores this information in knowledge base files. The knowledge base information is used by the ENGINEOUS expert system to drive knowledge-directed and knowledge-supplemented search modules to find an optimum shield design for a given reactor definition, ensuring that specified constraints are satisfied. Alternate designs, not accommodated in the optimization design rules, can readily be explored through the use of a parametric study capability

  8. Contaminant analysis automation, an overview

    International Nuclear Information System (INIS)

    Hollen, R.; Ramos, O. Jr.

    1996-01-01

    To meet the environmental restoration and waste minimization goals of government and industry, several government laboratories, universities, and private companies have formed the Contaminant Analysis Automation (CAA) team. The goal of this consortium is to design and fabricate robotics systems that standardize and automate the hardware and software of the most common environmental chemical methods. In essence, the CAA team takes conventional, regulatory-approved (EPA Methods) chemical analysis processes and automates them. The automation consists of standard laboratory modules (SLMs) that perform the work in a much more efficient, accurate, and cost-effective manner

  9. Personal computer based home automation system

    OpenAIRE

    Hellmuth, George F.

    1993-01-01

    The systems engineering process is applied in the development of the preliminary design of a home automation communication protocol. The objective of the communication protocol is to provide a means for a personal computer to communicate with adapted appliances in the home. A needs analysis is used to ascertain that a need exists for a home automation system. Numerous design alternatives are suggested and evaluated to determine the best possible protocol design. Coaxial cable...

  10. Automation from pictures

    International Nuclear Information System (INIS)

    Kozubal, A.J.

    1992-01-01

    The state transition diagram (STD) model has been helpful in the design of real time software, especially with the emergence of graphical computer aided software engineering (CASE) tools. Nevertheless, the translation of the STD to real time code has in the past been primarily a manual task. At Los Alamos we have automated this process. The designer constructs the STD using a CASE tool (Cadre Teamwork) using a special notation for events and actions. A translator converts the STD into an intermediate state notation language (SNL), and this SNL is compiled directly into C code (a state program). Execution of the state program is driven by external events, allowing multiple state programs to effectively share the resources of the host processor. Since the design and the code are tightly integrated through the CASE tool, the design and code never diverge, and we avoid design obsolescence. Furthermore, the CASE tool automates the production of formal technical documents from the graphic description encapsulated by the CASE tool. (author)
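
    The flavor of the generated state programs can be conveyed by a small event-driven state machine. The sketch below is written in Python purely for illustration (the actual translator emits C via SNL), and its states, events, and actions are invented.

```python
# Illustrative event-driven state program, in the spirit of code generated
# from a state transition diagram; states, events, and actions are invented.
TRANSITIONS = {
    ("idle",     "start_cmd"):   ("starting", lambda: print("open valve")),
    ("starting", "at_pressure"): ("running",  lambda: print("enable pump")),
    ("running",  "stop_cmd"):    ("idle",     lambda: print("shut down")),
}

def run_state_program(events, state="idle"):
    for event in events:                      # execution is driven by external events
        key = (state, event)
        if key in TRANSITIONS:
            state, action = TRANSITIONS[key]
            action()                          # perform the action attached to the transition
    return state

final_state = run_state_program(["start_cmd", "at_pressure", "stop_cmd"])
print("final state:", final_state)
```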

  11. CAPCAL, 3-D Capacitance Calculator for VLSI Purposes

    International Nuclear Information System (INIS)

    Seidl, Albert; Klose, Helmut; Svoboda, Mildos

    2004-01-01

    1 - Description of program or function: CAPCAL is devoted to the calculation of capacitances of three-dimensional wiring configurations as typically used in VLSI circuits. Due to analogies in the mathematical description, conductance and heat transport problems can also be treated by CAPCAL. To handle a problem with CAPCAL, some approximations have to be applied to the structure under investigation: the overall geometry has to be confined to a finite domain by using symmetry properties of the problem, and non-rectangular structures have to be simplified into an artwork of multiple boxes. 2 - Method of solution: The electrical field is described by the Laplace equation. The differential equation is discretized using the finite difference method. NEA-1327/01: The linear equation system is solved using a combined ADI-multigrid method. NEA-1327/04: The linear equation system is solved using a conjugate gradient method for CAPCAL V1.3. NEA-1327/05: The linear equation system is solved using a conjugate gradient method for CAPCAL V1.3. 3 - Restrictions on the complexity of the problem: NEA-1327/01: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER statements, which can easily be modified. If the geometry of the problem is defined such that Neumann boundaries dominate, the convergence of the iterative equation system solver is affected
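
    The discretization idea can be illustrated with a two-dimensional toy version of the solver: Dirichlet nodes (conductors and the grounded boundary) are held fixed while interior nodes relax to the average of their neighbours. CAPCAL itself is three-dimensional and uses ADI-multigrid or conjugate-gradient solvers; the Jacobi sketch below, with its invented geometry, is only for illustration, and the final capacitance extraction from the converged potential is omitted.

```python
import numpy as np

def solve_laplace_2d(potential, fixed_mask, iterations=5000):
    """Jacobi relaxation of the finite-difference Laplace equation on a 2-D grid.
    `fixed_mask` marks Dirichlet nodes (conductors / outer boundary) whose
    potential is held constant; all other nodes are repeatedly updated to the
    average of their four neighbours."""
    phi = potential.copy()
    for _ in range(iterations):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(fixed_mask, phi, avg)
    return phi

# Toy two-conductor problem: a small "wire" at 1 V inside a grounded box
n = 64
phi = np.zeros((n, n))
fixed = np.zeros((n, n), dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True   # grounded boundary
phi[28:36, 28:36] = 1.0
fixed[28:36, 28:36] = True                                       # conductor held at 1 V
phi = solve_laplace_2d(phi, fixed)
print("potential midway between wire and wall:", round(float(phi[32, 16]), 3))
```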

  12. BREED: a CDC-7600 computer program for the automation of breeder reactor design analysis (LWBR Development Program)

    International Nuclear Information System (INIS)

    Candelore, N.R.; Maher, C.M.

    1985-03-01

    BREED is an executive CDC-7600 program which was developed to facilitate the sequence of calculations and movement of data through a prescribed series of breeder reactor design computer programs in an uninterrupted single-job mode. It provides the capability to interface different application programs into a single computer run to provide a complete design function. The automation that can be achieved as a result of using BREED significantly reduces not only the time required for data preparation and hand transfer of data, but also the time required to complete an iteration of the total design effort. Data processing within a technical discipline and data transfer between technical disciplines can be accommodated. The input/output data processing is achieved with BREED by using a set of simple, easily understood user commands, usually short descriptive words, which the user inserts in his input deck. The input deck completely identifies and controls the calculational sequence needed to produce a desired end product. This report has been prepared to provide instructional material on the use of BREED and its user-oriented procedures to facilitate computer automation of design calculations

  13. CMOS VLSI Active-Pixel Sensor for Tracking

    Science.gov (United States)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 μm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 by 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ('rolling-shutter') readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The

  14. Automated Work Package: Initial Wireless Communication Platform Design, Development, and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Al Rashdan, Ahmad Yahya Mohammad [Idaho National Laboratory; Agarwal, Vivek [Idaho National Laboratory

    2016-03-01

    The Department of Energy’s Light Water Reactor Sustainability Program is developing the scientific basis to ensure long-term reliability, productivity, safety, and security of the nuclear power industry in the United States. The Instrumentation, Information, and Control (II&C) pathway of the program aims to increase the role of advanced II&C technologies to achieve this objective. One of the pathway efforts at Idaho National Laboratory (INL) is to improve the work packages execution process by replacing the expensive, inefficient, bulky, complex, and error-prone paper-based work orders with automated work packages (AWPs). An AWP is an automated and dynamic presentation of the work package designed to guide the user through the work process. It is loaded on a mobile device, such as a tablet, and is capable of communicating with plant equipment and systems to acquire plant and procedure states. The AWP replaces those functions where a computer is more efficient and reliable than a human. To enable the automatic acquisition of plant data, it is necessary to design and develop a prototype platform for data exchange between the field instruments and the AWP mobile devices. The development of the platform aims to reveal issues and solutions generalizable to large-scale implementation of a similar system. Topics such as bandwidth, robustness, response time, interference, and security are usually associated with wireless communication. These concerns, along with other requirements, are listed in an earlier INL report. Specifically, the targeted issues and performance aspects in this work are relevant to the communication infrastructure from the perspective of promptness, robustness, expandability, and interoperability with different technologies.

  15. Automated PCB Inspection System

    Directory of Open Access Journals (Sweden)

    Syed Usama BUKHARI

    2017-05-01

    Full Text Available Development of an automated PCB inspection system that meets the needs of industry is a challenging task. In this paper a case study is presented to exhibit a proposed system for migrating from a manual PCB inspection system to an automated PCB inspection system, with minimal intervention in the existing production flow, for a leading automotive manufacturing company. A detailed design of the system based on computer vision, followed by testing and analysis, is proposed in order to aid the manufacturer in the process of automation.
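
    One common computer-vision approach to PCB inspection is reference comparison, flagging differences between a golden-board image and the board under test. The OpenCV sketch below illustrates that generic approach only; it is not necessarily the pipeline proposed in the paper, and the file names and thresholds are placeholders.

```python
import cv2

# Reference-comparison inspection: differences between a golden board image and
# the board under test become defect candidates (OpenCV 4.x API assumed).
golden = cv2.imread("golden_board.png", cv2.IMREAD_GRAYSCALE)
test = cv2.imread("board_under_test.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(golden, test)                           # pixel-wise difference
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # keep significant deviations
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,              # drop isolated noise pixels
                        cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
defects = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]
print(f"{len(defects)} defect candidate region(s)")
```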

  16. The automated design of materials far from equilibrium

    Science.gov (United States)

    Miskin, Marc Z.

    Automated design is emerging as a powerful concept in materials science. By combining computer algorithms, simulations, and experimental data, new techniques are being developed that start with high level functional requirements and identify the ideal materials that achieve them. This represents a radically different picture of how materials become functional in which technological demand drives material discovery, rather than the other way around. At the frontiers of this field, materials systems previously considered too complicated can start to be controlled and understood. Particularly promising are materials far from equilibrium. Material robustness, high strength, self-healing and memory are properties displayed by several materials systems that are intrinsically out of equilibrium. These and other properties could be revolutionary, provided they can first be controlled. This thesis conceptualizes and implements a framework for designing materials that are far from equilibrium. We show how, even in the absence of a complete physical theory, design from the top down is possible and lends itself to producing physical insight. As a prototype system, we work with granular materials: collections of athermal, macroscopic identical objects, since these materials function both as an essential component of industrial processes as well as a model system for many non-equilibrium states of matter. We show that by placing granular materials in the context of design, benefits emerge simultaneously for fundamental and applied interests. As first steps, we use our framework to design granular aggregates with extreme properties like high stiffness, and softness. We demonstrate control over nonlinear effects by producing exotic aggregates that stiffen under compression. Expanding on our framework, we conceptualize new ways of thinking about material design when automatic discovery is possible. We show how to build rules that link particle shapes to arbitrary granular packing

  17. A Hybrid Genetic Programming Algorithm for Automated Design of Dispatching Rules.

    Science.gov (United States)

    Nguyen, Su; Mei, Yi; Xue, Bing; Zhang, Mengjie

    2018-06-04

    Designing effective dispatching rules for production systems is a difficult and time-consuming task if it is done manually. In the last decade, the growth of computing power, advanced machine learning, and optimisation techniques has made the automated design of dispatching rules possible, and automatically discovered rules are competitive with or outperform existing rules developed by researchers. Genetic programming is one of the most popular approaches to discovering dispatching rules in the literature, especially for complex production systems. However, the large heuristic search space may restrict genetic programming from finding near-optimal dispatching rules. This paper develops a new hybrid genetic programming algorithm for dynamic job shop scheduling based on a new representation, a new local search heuristic, and efficient fitness evaluators. Experiments show that the new method is effective regarding the quality of evolved rules. Moreover, evolved rules are also significantly smaller and contain more relevant attributes.
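
    A dispatching rule is ultimately a priority function over job and shop attributes, which is what the evolved expression trees encode. The sketch below applies one hand-written rule of this kind to a small job queue; the expression and attributes are illustrative and are not rules evolved by the algorithm in the paper.

```python
# A dispatching rule is a priority function over job attributes; the job with
# the highest priority is dispatched next. The weighted combination below is
# illustrative, not a rule evolved in the paper.
def priority(job, now):
    slack = job["due"] - now - job["proc_time"]            # time to spare before due date
    return -(2.0 * job["proc_time"] + 0.5 * slack)         # higher value = dispatch first

queue = [
    {"id": "J1", "proc_time": 5.0, "due": 20.0},
    {"id": "J2", "proc_time": 2.0, "due": 8.0},
    {"id": "J3", "proc_time": 7.0, "due": 30.0},
]
now = 0.0
next_job = max(queue, key=lambda j: priority(j, now))
print("dispatch:", next_job["id"])                         # -> J2 (short and urgent)
```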

  18. Automated design system for a rotor with an ellipse lobe profile

    International Nuclear Information System (INIS)

    Jung, Sung Yuen; Kim Chul; Han, Seung Moo; Cho, Hae Yong

    2009-01-01

    An internal lobe pump (ILP) is suitable for machine tool oil hydraulics, automotive engines, compressors, and various other devices. In particular, the ILP is an essential component of an automotive engine, used to feed lubricant oil through the system. The main components of an ILP are its rotors. The outer rotor is typically characterized by lobes with an elliptical shape, and the inner rotor profile is a conjugate to the outer profile. This paper describes a theoretical analysis of an ILP and the development of an integrated automated system for rotor design. This system is composed of three main modules and has been developed using AutoLISP for the AutoCAD program. The system generates a new lobe profile and automatically calculates flow rate and flow rate irregularity according to the lobe profile generated. Results obtained from the analysis can enable oil pump designers and manufacturers to become more efficient

  19. Automated design system for a rotor with an ellipse lobe profile

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Sung Yuen; Kim Chul [Pusan National University, Busan (Korea, Republic of); Han, Seung Moo [Kyung Hee University, Seoul (Korea, Republic of); Cho, Hae Yong [Chungbuk National University, Cheongju (Korea, Republic of)

    2009-11-15

    An internal lobe pump (ILP) is suitable for machine tool oil hydraulics, automotive engines, compressors, and various other devices. In particular, the ILP is an essential component of an automotive engine, used to feed lubricant oil through the system. The main components of an ILP are its rotors. The outer rotor is typically characterized by lobes with an elliptical shape, and the inner rotor profile is a conjugate to the outer profile. This paper describes a theoretical analysis of an ILP and the development of an integrated automated system for rotor design. This system is composed of three main modules and has been developed using AutoLISP for the AutoCAD program. The system generates a new lobe profile and automatically calculates flow rate and flow rate irregularity according to the lobe profile generated. Results obtained from the analysis can enable oil pump designers and manufacturers to become more efficient

  20. Bioreactor design for successive culture of anchorage-dependent cells operated in an automated manner.

    Science.gov (United States)

    Kino-Oka, Masahiro; Ogawa, Natsuki; Umegaki, Ryota; Taya, Masahito

    2005-01-01

    A novel bioreactor system was designed to perform a series of batchwise cultures of anchorage-dependent cells by means of automated operations of medium change and passage for cell transfer. The experimental data on contamination frequency confirmed the biological cleanliness of the bioreactor system, which performs its operations in a closed environment, as compared with a flask culture system with manual handling. In addition, tools for growth prediction (based on growth kinetics) and real-time growth monitoring by measurement of medium components (based on small-volume analyzing machinery) were installed into the bioreactor system to schedule the operations of medium change and passage and to confirm that the culture proceeds as scheduled, respectively. The successive culture of anchorage-dependent cells was conducted with the bioreactor running in an automated way. The automated bioreactor gave a successful culture performance in fair accordance with the preset schedule based on information from the latest subculture, realizing 79-fold cell expansion in 169 h. In addition, the correlation factor between experimental data and scheduled values throughout the bioreactor run was 0.998. It was concluded that the proposed bioreactor, with the integration of the prediction and monitoring tools, could offer a feasible system for the manufacturing process of cultured tissue products.
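
    The growth-prediction tool can be pictured as solving a simple kinetic model for the time at which a target cell number is reached. The sketch below assumes first-order exponential growth with an illustrative specific growth rate (chosen to be roughly consistent with the 79-fold expansion in 169 h reported above); the actual kinetics used by the bioreactor scheduler may differ.

```python
import math

def time_to_target(n0, n_target, mu):
    """Time (h) for cells to grow from n0 to n_target under a simple
    exponential model N(t) = N0 * exp(mu * t). The model and the growth
    rate used below are illustrative assumptions, not the paper's kinetics."""
    return math.log(n_target / n0) / mu

mu = 0.026                        # assumed specific growth rate, 1/h (~ln(79)/169)
n0, confluent = 1.0e5, 8.0e5      # seeded cells and cell number at passage density
print(f"schedule passage after ~{time_to_target(n0, confluent, mu):.0f} h")
```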

  1. Adaptive and Adaptable Automation Design: A Critical Review of the Literature and Recommendations for Future Research

    Science.gov (United States)

    Prinzel, Lawrence J., III; Kaber, David B.

    2006-01-01

    This report presents a review of literature on approaches to adaptive and adaptable task/function allocation and adaptive interface technologies for effective human management of complex systems that are likely to be issues for the Next Generation Air Transportation System, and a focus of research under the Aviation Safety Program, Integrated Intelligent Flight Deck Project. Contemporary literature retrieved from an online database search is summarized and integrated. The major topics include the effects of delegation-type, adaptable automation on human performance, workload and situation awareness, the effectiveness of various automation invocation philosophies and strategies to function allocation in adaptive systems, and the role of user modeling in adaptive interface design and the performance implications of adaptive interface technology.

  2. Plant automation-application to SBWR project

    International Nuclear Information System (INIS)

    Rodriguez Rodriguez, C.

    1995-01-01

    In accordance with the requirements set out in the URD (Utility Requirements Document) issued by EPRI (Electric Power Research Institute), the design of new reactors, whether evolutionary or passive, shall take into account the systematic automation of functions relating to normal plant operation. The objectives established are to: simplify operator-performed tasks; reduce the risk of operator error by considering human factors in the allocation of tasks; improve man-machine reliability; and increase the availability of the plant. In previous designs, automation has only been considered from the point of view of compliance with regulatory requirements for safety-related systems or, in isolated cases, as a method of protecting the investment where there is a risk of damage to main equipment. The use of digital technology has kept the systematic pursuit of these objectives in the design of automated systems for processes associated with normal plant operation (startup, load follow, normal shutdown, etc.) from being excessively complex and therefore costly to undertake. This paper describes how the automation of the aforementioned normal plant operation activities has been approached in General Electric's SBWR (Simplified Boiling Water Reactor) design. (Author)

  3. Development and Design of a User Interface for a Computer Automated Heating, Ventilation, and Air Conditioning System

    International Nuclear Information System (INIS)

    Anderson, B.

    1999-01-01

    A user interface is created to monitor and operate the heating, ventilation, and air conditioning system. The interface is networked to the system's programmable logic controller. The controller maintains automated control of the system. Through the interface, the user is able to see the status of the system and override or adjust the automatic control features. The interface is programmed to show digital readouts of system equipment as well as visual cues of system operational statuses. It also provides information for system design and component interaction. The interface is made easier to read by simple designs, color coordination, and graphics. Fermi National Accelerator Laboratory (Fermilab) conducts high energy particle physics research. Part of this research involves collision experiments with protons and anti-protons. These interactions are contained within one of two massive detectors along Fermilab's largest particle accelerator, the Tevatron. The D-Zero Assembly Building houses one of these detectors. At this time detector systems are being upgraded for a second experiment run, titled Run II. Unlike the previous run, systems at D-Zero must be computer automated so operators do not have to continually monitor and adjust these systems during the run. Human intervention should only be necessary for system start up and shut down, and equipment failure. Part of this upgrade includes the heating, ventilation, and air conditioning system (HVAC system). The HVAC system is responsible for controlling two subsystems, the air temperatures of the D-Zero Assembly Building and associated collision hall, as well as six separate water systems used in the heating and cooling of the air and detector components. The HVAC system is automated by a programmable logic controller. In order to provide system monitoring and operator control, a user interface is required. This paper will address methods and strategies used to design and implement an effective user interface

  4. DIDACTIC AUTOMATED STATION OF COMPLEX KINEMATICS

    Directory of Open Access Journals (Sweden)

    Mariusz Sosnowski

    2014-03-01

    Full Text Available The paper presents the design, control system, and software that control the automated station of complex kinematics. The control interface and software were developed and manufactured at the West Pomeranian University of Technology in Szczecin, in the Department of Automated Manufacturing Systems Engineering and Quality. The control system and software were installed to support classes that teach programming and the design of structures and systems for monitoring robot kinematic components with non-standard structures.

  5. Automated optical assembly

    Science.gov (United States)

    Bala, John L.

    1995-08-01

    Automation and polymer science represent fundamental new technologies which can be directed toward realizing the goal of establishing a domestic, world-class, commercial optics business. Use of innovative optical designs using precision polymer optics will enable the US to play a vital role in the next generation of commercial optical products. The increased cost savings inherent in the utilization of optical-grade polymers outweighs almost every advantage of using glass for high volume situations. Optical designers must gain experience with combined refractive/diffractive designs and broaden their knowledge base regarding polymer technology beyond a cursory intellectual exercise. Implementation of a fully automated assembly system, combined with utilization of polymer optics, constitutes the type of integrated manufacturing process which will enable the US to successfully compete with the low-cost labor employed in the Far East, as well as to produce an equivalent product.

  6. Conceptual Design and Feasibility Analyses of a Robotic System for Automated Exterior Wall Painting

    Directory of Open Access Journals (Sweden)

    Young S. Kim

    2008-11-01

    Full Text Available There are approximately 6,677,000 apartment housing units in South Korea. Exterior wall painting for such multi-dwelling apartment housings in South Korea represents a typical area to which construction automation technology can be applied for improvement in safety, productivity, quality, and cost over the conventional method. The conventional exterior wall painting is costly and labor-intensive, and it especially exposes workers to significant health and safety risks. The primary objective of this study is to design a conceptual model of an exterior wall painting robot which is applicable to apartment housing construction and maintenance, and to conduct its technical-economic feasibility analyses. In this study, a design concept using a high ladder truck is proposed as the best alternative for automation of the exterior wall painting. Conclusions made in this study show that the proposed exterior wall painting robot is technically and economically feasible, and can greatly enhance safety, productivity, and quality compared to the conventional method. Finally, it is expected that the conceptual model of the exterior wall painting robot would be efficiently used in various applications in exterior wall finishing and maintenance of other architectural and civil structures such as commercial buildings, towers, and high-rise storage tanks.

  7. Layered distributed architecture for plant automation

    International Nuclear Information System (INIS)

    Aravamuthan, G.; Verma, Yachika; Ranjan, Jyoti; Chachondia, Alka S.; Ganesh, G.

    2005-01-01

    The development of plant automation systems and the associated software remains one of the greatest challenges to the widespread implementation of highly adaptive, re-configurable automation technology. This paper presents a layered distributed architecture for a plant automation system designed to support rapid reconfiguration and redeployment of automation components. The paper first presents the evolution of automation architectures and their associated environments over the past few decades, and then presents the concept of a layered system architecture and the use of automation components to support the construction of a wide variety of automation systems. It also highlights the role of standards and technology that can be used in the development of automation components. We have attempted to adhere to open standards and technology in the development of automation components at the various layers. It also highlights the application of this concept in the development of an Operator Information System (OIS) for the Advanced Heavy Water Reactor (AHWR). (author)

  8. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3mm x 3mm microchip consumes 1.29 mW power making it promising for applications including neuromorphic modeling and neural prostheses.
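
    The rate-based channel kinetics referred to above follow the standard first-order Hodgkin-Huxley form, in which each gating variable relaxes according to voltage-dependent opening and closing rates. The sketch below integrates the textbook K+ activation variable under a clamped voltage; the chip's configurable rate parameters would generally differ from these textbook values.

```python
import numpy as np

# First-order (rate-based) gating kinetics of the Hodgkin-Huxley form:
#   dn/dt = alpha_n(V) * (1 - n) - beta_n(V) * n
# The rate functions below are the textbook K+ activation rates (V in mV);
# they stand in for the chip's individually configurable kinetics.
def alpha_n(v):
    return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    return 0.125 * np.exp(-(v + 65.0) / 80.0)

dt, v = 0.01, -40.0                              # ms, clamped membrane voltage
n = 0.32                                         # resting value of the gating variable
for _ in range(int(20.0 / dt)):                  # integrate 20 ms with forward Euler
    n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
print(f"steady-state n at {v} mV is about {n:.3f}")
```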

  9. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.

  10. Design and implementation of an automated email notification system for results of tests pending at discharge.

    Science.gov (United States)

    Dalal, Anuj K; Schnipper, Jeffrey L; Poon, Eric G; Williams, Deborah H; Rossi-Roh, Kathleen; Macleay, Allison; Liang, Catherine L; Nolido, Nyryan; Budris, Jonas; Bates, David W; Roy, Christopher L

    2012-01-01

    Physicians are often unaware of the results of tests pending at discharge (TPADs). The authors designed and implemented an automated system to notify the responsible inpatient physician of the finalized results of TPADs using secure, network email. The system coordinates a series of electronic events triggered by the discharge time stamp and sends an email to the identified discharging attending physician once finalized results are available. A carbon copy is sent to the primary care physicians in order to facilitate communication and the subsequent transfer of responsibility. Logic was incorporated to suppress selected tests and to limit notification volume. The system was activated for patients with TPADs discharged by randomly selected inpatient-attending physicians during a 6-month pilot. They received approximately 1.6 email notifications per discharged patient with TPADs. Eighty-four per cent of inpatient-attending physicians receiving automated email notifications stated that they were satisfied with the system in a brief survey (59% survey response rate). Automated email notification is a useful strategy for managing results of TPADs.
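
    A minimal sketch of the notification step, using Python's standard email and smtplib modules, is shown below. The addresses, mail server, and result payload are hypothetical placeholders, and the sketch omits the event coordination, physician lookup, suppression logic, and volume limits described above.

```python
import smtplib
from email.message import EmailMessage

def notify_tpad_results(attending, pcp, patient_id, finalized_results):
    """Email the discharging attending the finalized TPAD results, with the
    primary care physician in copy. The addresses, server name, and the
    upstream result query are hypothetical placeholders."""
    msg = EmailMessage()
    msg["Subject"] = f"Finalized results of tests pending at discharge ({patient_id})"
    msg["From"] = "results-notifier@example-hospital.org"
    msg["To"] = attending
    msg["Cc"] = pcp
    msg.set_content("\n".join(f"{name}: {value}" for name, value in finalized_results))

    with smtplib.SMTP("smtp.example-hospital.org") as smtp:   # internal relay assumed
        smtp.send_message(msg)

notify_tpad_results("attending@example-hospital.org", "pcp@example-hospital.org",
                    "MRN-0001", [("Blood culture", "no growth at 5 days")])
```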

  11. Virtual Diagnostic Sensors Design for an Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Ralf Stetter

    2018-05-01

    Full Text Available In recent years, Automated Guided Vehicles (AGVs) have been playing an increasingly important role in production industry and infrastructure and will soon arrive in other areas of human life such as the transportation of goods and people. However, several challenges still aggravate the operation of AGVs, which limits the extent of their implementation. One major challenge is the realization of reliable sensors that can capture the different aspects of the state of an AGV as well as its surroundings. One promising approach towards more reliable sensors is the supplementary application of virtual sensors, which are able to generate virtual measurements by using other sources of information such as actuator states and already existing sensors together with appropriate mathematical models. The focus of the research described in this paper is the design of virtual sensors determining forces and torques acting on an AGV. The proposed novel approach uses quadratic boundedness, which makes it possible to include bounded disturbances acting on the AGV. One major advantage of the presented approach is that the use of complex tire models can be avoided. Information from acceleration and yaw rate sensors is processed in order to realize reliable virtual force and torque sensors. The resulting force and torque information can be used for several diagnostic purposes such as fault detection or fault prevention. The presented approach is explained and verified on the basis of an innovative design of an AGV. This innovative design addresses another major challenge for AGVs, which is the limited maneuvering possibilities of many AGV designs. The innovative design allows nearly unlimited maneuvering possibilities but requires reliable sensor data. The application of the approach in the AGV resulted in the insight that the generated estimates are consistent with the longitudinal forces and torques obtained by a proven reference model.

  12. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Science.gov (United States)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.
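
    The mapping of a first-order differential equation onto a log-domain filter mentioned above can be illustrated numerically: if tau*y' + y = x and we substitute w = ln(y), the same dynamics take the form w' = (x*exp(-w) - 1)/tau, which is close in spirit to the state equation a log-domain (translinear) circuit realizes. The sketch below integrates both forms and checks that they agree; it is a conceptual demonstration with arbitrary parameters, not the Meddis hair-cell circuit itself.

```python
import numpy as np

# tau * dy/dt + y = x            (linear first-order filter)
# with y = exp(w):  dw/dt = (x * exp(-w) - 1) / tau   (log-domain form)
tau, dt = 5e-3, 1e-5
t = np.arange(0.0, 0.1, dt)
x = 1.0 + 0.5 * np.sin(2 * np.pi * 100 * t)   # strictly positive input (log-domain needs y > 0)

y = np.empty_like(t); y[0] = 1.0
w = np.empty_like(t); w[0] = np.log(y[0])
for i in range(1, len(t)):
    y[i] = y[i - 1] + dt * (x[i - 1] - y[i - 1]) / tau
    w[i] = w[i - 1] + dt * (x[i - 1] * np.exp(-w[i - 1]) - 1.0) / tau

print("max deviation between linear and log-domain integration:",
      np.max(np.abs(y - np.exp(w))))
```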

  13. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.

  14. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    Science.gov (United States)

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited processing capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators, for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.

  15. Employment Opportunities for the Handicapped in Programmable Automation.

    Science.gov (United States)

    Swift, Richard; Leneway, Robert

    A Computer Integrated Manufacturing System may make it possible for severely disabled people to custom design, machine, and manufacture either wood or metal parts. Programmable automation merges computer aided design, computer aided manufacturing, computer aided engineering, and computer integrated manufacturing systems with automated production…

  16. Embedded design based virtual instrument program for positron beam automation

    International Nuclear Information System (INIS)

    Jayapandian, J.; Gururaj, K.; Abhaya, S.; Parimala, J.; Amarendra, G.

    2008-01-01

    Automation of positron beam experiment with a single chip embedded design using a programmable system on chip (PSoC) which provides easy interfacing of the high-voltage DC power supply is reported. Virtual Instrument (VI) control program written in Visual Basic 6.0 ensures the following functions (i) adjusting of sample high voltage by interacting with the programmed PSoC hardware, (ii) control of personal computer (PC) based multi channel analyzer (MCA) card for energy spectroscopy, (iii) analysis of the obtained spectrum to extract the relevant line shape parameters, (iv) plotting of relevant parameters and (v) saving the file in the appropriate format. The present study highlights the hardware features of the PSoC hardware module as well as the control of MCA and other units through programming in Visual Basic

  17. Automated controlled-potential coulometric determination of uranium

    International Nuclear Information System (INIS)

    Knight, C.H.; Clegg, D.E.; Wright, K.D.; Cassidy, R.M.

    1982-06-01

    A controlled-potential coulometer has been automated in our laboratory for routine determination of uranium in solution. The CRNL-designed automated system controls degassing, prereduction, and reduction of the sample. The final result is displayed on a digital coulometer readout. Manual and automated modes of operation are compared to show the precision and accuracy of the automated system. Results are also shown for the coulometric titration of typical uranium-aluminum alloy samples

  18. Highway Electrification And Automation

    OpenAIRE

    Shladover, Steven E.

    1992-01-01

    This report addresses how the California Department of Transportation and the California PATH Program have made efforts to evaluate the feasibility and applicability of highway electrification and automation technologies. In addition to describing how the work was conducted, the report also describes the findings on highway electrification and highway automation, with experimental results, design study results, and a region-wide application impacts study for Los Angeles.

  19. Automated Theorem Proving in High-Quality Software Design

    Science.gov (United States)

    Schumann, Johann; Swanson, Keith (Technical Monitor)

    2001-01-01

    The amount and complexity of software developed during the last few years have increased tremendously. In particular, programs are being used more and more in embedded systems (from car-brakes to plant-control). Many of these applications are safety-relevant, i.e. a malfunction of hardware or software can cause severe damage or loss. Tremendous risks are typically present in the area of aviation, (nuclear) power plants or (chemical) plant control. Here, even small problems can lead to thousands of casualties and huge financial losses. Large financial risks also exist when computer systems are used in the area of telecommunication (telephone, electronic commerce) or space exploration. Computer applications in this area are not only subject to safety considerations, but also security issues are important. All these systems must be designed and developed to guarantee high quality with respect to safety and security. Even in an industrial setting which is (or at least should be) aware of the high requirements in Software Engineering, many incidents occur. For example, the Warsaw Airbus crash was caused by an incomplete requirements specification. Uncontrolled reuse of an Ariane 4 software module was the reason for the Ariane 5 disaster. Some recent incidents in the telecommunication area, like illegal "cloning" of smart-cards of D2 GSM mobile phones, or the extraction of (secret) passwords from German T-online users, show that serious flaws can also happen in this area. Due to the inherent complexity of computer systems, most authors claim that only a rigorous application of formal methods in all stages of the software life cycle can ensure high quality of the software and lead to truly safe and secure systems. In this paper, we will look at how far automated theorem proving can contribute to a more widespread application of formal methods and their tools, and what automated theorem provers (ATPs) must provide in order to be useful.

  20. USB port compatible virtual instrument based automation for x-ray diffractometer setup

    International Nuclear Information System (INIS)

    Jayapandian, J.; Sheela, O.K.; Mallika, R.; Thiruarul, A.; Purniah, B.

    2004-01-01

    Windows-based virtual instrument (VI) programs in a graphical language simplify design automation in R and D laboratories. With minimal hardware and maximum support of software, the automation becomes easier and user-friendly. A novel design approach for the automation of a SIEMENS-make x-ray diffractometer setup is described in this paper. The automation is achieved with an indigenously developed virtual instrument program in LabVIEW ver. 6.0 and with a simple hardware design using an 89C2051 micro-controller compatible with the PC's USB port for the total automation of the experiment. (author)

  1. Improving Usefulness of Automated Driving by Lowering Primary Task Interference through HMI Design

    Directory of Open Access Journals (Sweden)

    Frederik Naujoks

    2017-01-01

    Full Text Available During conditionally automated driving (CAD), driving time can be used for non-driving-related tasks (NDRTs). To increase safety and comfort of an automated ride, upcoming automated manoeuvres such as lane changes or speed adaptations may be communicated to the driver. However, as the driver’s primary task consists of performing NDRTs, they might prefer to be informed in a nondistracting way. In this paper, the potential of using speech output to improve human-automation interaction is explored. A sample of 17 participants completed different situations which involved communication between the automation and the driver in a motion-based driving simulator. The Human-Machine Interface (HMI) of the automated driving system consisted of a visual-auditory HMI with either generic auditory feedback (i.e., standard information tones) or additional speech output. The drivers were asked to perform a common NDRT during the drive. Compared to generic auditory output, communicating upcoming automated manoeuvres additionally by speech led to a decrease in self-reported visual workload and decreased monitoring of the visual HMI. However, interruptions of the NDRT were not affected by additional speech output. Participants clearly favoured the HMI with additional speech-based output, demonstrating the potential of speech to enhance usefulness and acceptance of automated vehicles.

  2. Human-centred automation: an explorative study

    International Nuclear Information System (INIS)

    Hollnagel, Erik; Miberg, Ann Britt

    1999-05-01

    The purpose of the programme activity on human-centred automation at the HRP is to develop knowledge (in the form of models and theories) and tools (in the form of techniques and simulators) to support design of automation that ensures effective human performance and comprehension. This report presents the work done on both the analytical and experimental side of this project. The analytical work has surveyed common definitions of automation and traditional design principles. A general finding is that human-centred automation usually is defined in terms of what it is not. This is partly due to a lack of adequate models of human-automation interaction. Another result is a clarification of the consequences of automation, in particular with regard to situation awareness and workload. The experimental work has taken place as an explorative experiment in HAMMLAB in collaboration with IPSN (France). The purpose of this experiment was to increase the understanding of how automation influences operator performance in NPP control rooms. Two different types of automation (extensive and limited) were considered in scenarios having two different degrees of complexity (high and low), and involving diagnostic and procedural tasks. Six licensed NPP crews from the NPP at Loviisa, Finland, participated in the experiment. The dependent variables applied were plant performance, operator performance, self-rated crew performance, situation awareness, workload, and operator trust in the automation. The results from the diagnostic scenarios indicated that operators' judgement of crew efficiency was related to their level of trust in the automation, and further that operators trusted automation least and rated crew performance lowest in situations where crew performance was efficient and vice versa. The results from procedural scenarios indicated that extensive automation efficiently supported operators' performance, and further that operators' judgement of crew performance efficiency

  3. Toward a human-centered aircraft automation philosophy

    Science.gov (United States)

    Billings, Charles E.

    1989-01-01

    The evolution of automation in civil aircraft is examined in order to discern trends in the respective roles and functions of automation technology and the humans who operate these aircraft. The effects of advances in automation technology on crew reaction is considered and it appears that, though automation may well have decreased the frequency of certain types of human errors in flight, it may also have enabled new categories of human errors, some perhaps less obvious and therefore more serious than those it has alleviated. It is suggested that automation could be designed to keep the pilot closer to the control of the vehicle, while providing an array of information management and aiding functions designed to provide the pilot with data regarding flight replanning, degraded system operation, and the operational status and limits of the aircraft, its systems, and the physical and operational environment. The automation would serve as the pilot's assistant, providing and calculating data, watching for the unexpected, and keeping track of resources and their rate of expenditure.

  4. Automated processing of endoscopic surgical instruments.

    Science.gov (United States)

    Roth, K; Sieber, J P; Schrimm, H; Heeg, P; Buess, G

    1994-10-01

    This paper deals with the requirements for automated processing of endoscopic surgical instruments. After a brief analysis of the current problems, solutions are discussed. Test procedures have been developed to validate the automated processing, so that the cleaning results are guaranteed and reproducible. Also a device for testing and cleaning was designed together with Netzsch Newamatic and PCI, called TC-MIC, to automate processing and reduce manual work.

  5. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images

    International Nuclear Information System (INIS)

    Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza

    2014-01-01

    Two-dimensional projection radiographs have been traditionally considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration and to evaluate its accuracy. The software was designed using MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as our reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as the test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The differences in the distances of coordinates of each landmark on each image between manual and automated detection methods were calculated and reported as mean errors. The combined intraclass correlation coefficient for intraobserver reliability was 0.89 and for interobserver reliability 0.87 (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, which was based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless we recommend repetition of this study using other techniques, such as intensity-based methods
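
    The feature-based part of the registration described above uses principal axes; the sketch below shows the core computation for a point set or thresholded voxel set (centroid plus eigenvectors of the covariance matrix, used to bring each volume into a canonical pose). It is a generic illustration under simplifying assumptions (no handling of axis-sign ambiguity, no intensity-based refinement) and is not the authors' MATLAB implementation.

```python
import numpy as np

def principal_axes_frame(points):
    """Centroid and principal axes (covariance eigenvectors) of an N x 3 point set.

    Returns (centroid, axes) with axes as columns, ordered by decreasing variance.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalue order
    return centroid, eigvecs[:, ::-1]

def to_canonical_pose(points):
    """Map a point set into its centroid-centred, principal-axes-aligned frame."""
    centroid, axes = principal_axes_frame(points)
    return (points - centroid) @ axes          # coordinates in the principal-axes basis

# In a principal-axes registration, both the atlas volume and the test volume are
# brought into this canonical pose; composing the two pose-normalising transforms
# gives the coarse alignment, which a voxel-similarity-based step then refines
# (and which also resolves the inherent sign ambiguity of the eigenvectors).
rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3)) * np.array([3.0, 2.0, 1.0])
theta = np.deg2rad(25)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
rotated = cloud @ Rz.T + np.array([12.0, -4.0, 7.0])

# Up to axis-sign flips, the two canonical poses have matching (diagonal) covariances.
print(np.round(np.cov(to_canonical_pose(cloud).T), 2))
print(np.round(np.cov(to_canonical_pose(rotated).T), 2))
```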

  6. Future Control and Automation : Proceedings of the 2nd International Conference on Future Control and Automation

    CERN Document Server

    2012-01-01

    This volume, Future Control and Automation - Volume 2, includes the best papers from the 2012 2nd International Conference on Future Control and Automation (ICFCA 2012), held on July 1-2, 2012, in Changsha, China. Future control and automation is the use of control systems and information technologies to reduce the need for human work in the production of goods and services. This volume can be divided into six sessions on the basis of the classification of the manuscripts considered, which are listed as follows: Mathematical Modeling, Analysis and Computation, Control Engineering, Reliable Networks Design, Vehicular Communications and Networking, Automation and Mechatronics.

  7. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and construction of fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability. To be accurate, a simulator must employ multi-valued logic in order to represent unknown signal values, high impedance, and signal transitions; model circuit delays such as transport, rise/fall, and inertial delays; and handle an adequate set of fault modes. Of the three basic fault simulators now in use (parallel, deductive and concurrent), concurrent fault simulation appears most promising.
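
    To make the fault-simulation task concrete, here is a minimal serial (one-fault-at-a-time) stuck-at fault simulator for a tiny combinational netlist, used only to score the fault coverage of a test set. It is a toy two-valued illustration of the job that the parallel, deductive, and concurrent techniques accelerate; the netlist and test vectors are hypothetical.

```python
from itertools import product

# Toy gate-level netlist: (output_net, gate_type, input_nets), topologically ordered
NETLIST = [
    ("n1", "AND", ("a", "b")),
    ("n2", "OR",  ("b", "c")),
    ("y",  "AND", ("n1", "n2")),
]
PRIMARY_INPUTS = ("a", "b", "c")
PRIMARY_OUTPUTS = ("y",)
GATES = {"AND": lambda x, y: x & y, "OR": lambda x, y: x | y}

def simulate(vector, fault=None):
    """Two-valued logic simulation; 'fault' is (net, stuck_value) or None."""
    values = dict(zip(PRIMARY_INPUTS, vector))
    if fault and fault[0] in values:
        values[fault[0]] = fault[1]
    for out, gate, ins in NETLIST:
        values[out] = GATES[gate](*(values[i] for i in ins))
        if fault and fault[0] == out:
            values[out] = fault[1]
    return tuple(values[o] for o in PRIMARY_OUTPUTS)

def fault_coverage(test_set):
    """Fraction of single stuck-at faults detected by the test set (serial fault simulation)."""
    nets = list(PRIMARY_INPUTS) + [g[0] for g in NETLIST]
    faults = [(n, v) for n in nets for v in (0, 1)]
    detected = {f for f in faults
                if any(simulate(t, f) != simulate(t) for t in test_set)}
    return len(detected) / len(faults)

exhaustive = list(product((0, 1), repeat=3))
print("coverage of exhaustive test set:", fault_coverage(exhaustive))
print("coverage of two vectors:", fault_coverage([(1, 1, 1), (0, 0, 0)]))
```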

  8. Automation U.S.A.: Overcoming Barriers to Automation.

    Science.gov (United States)

    Brody, Herb

    1985-01-01

    Although labor unions and inadequate technology play minor roles, the principal barrier to factory automation is "fear of change." Related problems include long-term benefits, nontechnical executives, and uncertainty of factory cost accounting. Industry support for university programs is helping to educate engineers to design, implement, and…

  9. OPTIMIZING THE DESIGN OF THE SYSTEMS OF INFORMATION PROTECTION IN AUTOMATED INFORMATIONAL SYSTEMS OF INDUSTRIAL ENTERPRISES

    Directory of Open Access Journals (Sweden)

    I. E. L'vovich

    2014-01-01

    Full Text Available Automation is now widely applied to improve the efficiency and operability of complex systems. Information has become a universal commodity in the relationships between various organizations, so the question of its protection is increasingly important. Particular value is placed on optimal design when creating protection systems, since it allows the best decisions to be chosen from a set of alternatives with the greatest probability. This is now relevant for the majority of industrial enterprises, because a correctly designed and implemented protection system underpins the successful functioning and competitiveness of the whole organization. The stages of work in creating an information security system for an industrial enterprise are presented. Attention is focused on one of the most important approaches to optimal design: multi-alternative optimization. The article considers the structure of protection system development from the point of view of several models, each of which gives insight into particular design features of the system as a whole. Special attention is paid to the problem of creating the information security system itself, as it has the most complex structure. Tasks are formulated for automating each design stage of the information security system of an industrial enterprise. An overview of each stage of the design work is given, which helps in understanding the internal structure of the protection system being created and in stating the requirements for a reliable information security complex for the enterprise. This makes it possible to reduce risks at early design stages and to define the types of hardware and software complexes needed for the future system.

  10. A sensor-based automation system for handling nuclear materials

    International Nuclear Information System (INIS)

    Drotning, W.; Kimberly, H.; Wapman, W.; Darras, D.

    1997-01-01

    An automated system is being developed for handling large payloads of radioactive nuclear materials in an analytical laboratory. The automation system performs unpacking and repacking of payloads from shipping and storage containers, and delivery of the payloads to the stations in the laboratory. The system uses machine vision and force/torque sensing to provide sensor-based control of the automation system in order to enhance system safety, flexibility, and robustness, and achieve easy remote operation. The automation system also controls the operation of the laboratory measurement systems and coordinates them with the robotic system. Particular attention has been given to system design features and analytical methods that provide an enhanced level of operational safety. Independent mechanical gripper interlock and tool release mechanisms were designed to prevent payload mishandling. An extensive Failure Modes and Effects Analysis of the automation system was developed as a safety design analysis tool

  11. Bar-code automated waste tracking system

    International Nuclear Information System (INIS)

    Hull, T.E.

    1994-10-01

    The Bar-Code Automated Waste Tracking System was designed to be a site-specific program with a general purpose application for transportability to other facilities. The system is user-friendly, totally automated, and incorporates the use of a drive-up window that is close to the areas dealing in container preparation, delivery, pickup, and disposal. The system features 'stop-and-go' operation rather than a long, tedious, error-prone manual entry. The system is designed for automation but allows operators to concentrate on proper handling of waste while maintaining manual entry of data as a backup. A large wall plaque filled with bar-code labels is used to input specific details about any movement of waste

  12. AP600 level of automation: United States utility perspective

    International Nuclear Information System (INIS)

    Bekkerman, A.Y.

    1997-01-01

    Design of the AP600 advanced nuclear plant man-machine interface system (M-MIS) is guided by the applicable requirements from the Utility Requirements Document (URD). However, the URD has left certain aspects of the M-MIS to be determined by the designer working together with utilities sponsoring the work. This is particularly true in the case of the level of automation to be designed into the M-MIS. Based on experience from currently operating plants, utilities have specified the identity and roles of personnel in the control room, which has led to establishing a number of level of automation issues for the AP600. The key role of automated computerized procedures in the AP600 automation has been determined and resolved. 5 refs

  13. Powder handling for automated fuel processing

    International Nuclear Information System (INIS)

    Frederickson, J.R.; Eschenbaum, R.C.; Goldmann, L.H.

    1989-01-01

    Installation of the Secure Automated Fabrication (SAF) line has been completed. It is located in the Fuel Cycle Plant (FCP) at the Department of Energy's (DOE) Hanford site near Richland, Washington. The SAF line was designed to fabricate advanced reactor fuel pellets and assemble fuel pins by automated, remote operation. This paper describes powder handling equipment and techniques utilized for automated powder processing and powder conditioning systems in this line. 9 figs

  14. Test Results for the Automated Rendezvous and Capture System

    Science.gov (United States)

    Cruzen, Craig; Dabney, Richard; Lomas, James

    1999-01-01

    The Automated Rendezvous and Capture (AR&C) system was designed and tested at NASA's Marshall Space Flight Center (MSFC) to demonstrate technologies and mission strategies for automated rendezvous and docking of spacecraft in Earth orbit. The system incorporates some of the latest innovations in Global Positioning System space navigation, laser sensor technologies and automated mission sequencing algorithms. The system's initial design and integration was completed in 1998 and has undergone testing at MSFC. This paper describes the major components of the AR&C system and presents results from the official system tests performed in MSFC's Flight Robotics Laboratory with digital simulations and hardware in the loop tests. The results show that the AR&C system can safely and reliably perform automated rendezvous and docking missions in the absence of system failures with 100 percent success. When system failures are included, the system uses its automated collision avoidance maneuver logic to recover in a safe manner. The primary objective of the AR&C project is to prove that by designing a safe and robust automated system, mission operations cost can be reduced by decreasing the personnel required for mission design, preflight planning and training required for crewed rendezvous and docking missions.

  15. Low-Power Differential SRAM design for SOC Based on the 25-um Technology

    Science.gov (United States)

    Godugunuri, Sivaprasad; Dara, Naveen; Sambasiva Nayak, R.; Nayeemuddin, Md; Singh, Yadu, Dr.; Veda, R. N. S. Sunil

    2017-08-01

    In recent years, SoC designs have become among the most complex designs in VLSI, and these SoCs face significant low-power operation issues; to address this, we implemented a low-power SRAM. However, the SRAM architecture critically affects the total power and area of the SoC. To overcome these drawbacks, this paper proposes a low-power differential SRAM design. The differential SRAM design stores multiple bits within the same cell and operates at minimum operating voltage and area per bit. The differential SRAM was designed based on the 25-um technology using the Tanner EDA tool.

  16. Automated magnetic divertor design for optimal power exhaust

    Energy Technology Data Exchange (ETDEWEB)

    Blommaert, Maarten

    2017-07-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not in the least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation

  17. Automated magnetic divertor design for optimal power exhaust

    International Nuclear Information System (INIS)

    Blommaert, Maarten

    2017-01-01

    The so-called divertor is the standard particle and power exhaust system of nuclear fusion tokamaks. In essence, the magnetic configuration hereby 'diverts' the plasma to a specific divertor structure. The design of this divertor is still a key issue to be resolved to evolve from experimental fusion tokamaks to commercial power plants. The focus of this dissertation is on one particular design requirement: avoiding excessive heat loads on the divertor structure. The divertor design process is assisted by plasma edge transport codes that simulate the plasma and neutral particle transport in the edge of the reactor. These codes are computationally extremely demanding, not in the least due to the complex collisional processes between plasma and neutrals that lead to strong radiation sinks and macroscopic heat convection near the vessel walls. One way of improving the heat exhaust is by modifying the magnetic confinement that governs the plasma flow. In this dissertation, automated design of the magnetic configuration is pursued using adjoint based optimization methods. A simple and fast perturbation model is used to compute the magnetic field in the vacuum vessel. A stable optimal design method of the nested type is then elaborated that strictly accounts for several nonlinear design constraints and code limitations. Using appropriate cost function definitions, the heat is spread more uniformly over the high-heat load plasma-facing components in a practical design example. Furthermore, practical in-parts adjoint sensitivity calculations are presented that provide a way to an efficient optimization procedure. Results are elaborated for a fictitious JET (Joint European Torus) case. The heat load is strongly reduced by exploiting an expansion of the magnetic flux towards the solid divertor structure. Subsequently, shortcomings of the perturbation model for magnetic field calculations are discussed in comparison to a free boundary equilibrium (FBE) simulation. These flaws
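
    The adjoint-based sensitivity idea mentioned in these two records can be illustrated on a toy problem: for a discretized linear state equation A x = p b with a single design parameter p and a quadratic cost J = 0.5*||x - x_target||^2, one adjoint solve gives dJ/dp without re-solving the forward model for perturbed parameters. The sketch below shows only this generic pattern with hypothetical operators; it contains nothing of the plasma-edge physics or the magnetic perturbation model of the thesis.

```python
import numpy as np

# Forward model: A x = p * b  (a 1-D Poisson-like operator), design parameter p.
# Cost: J(p) = 0.5 * ||x(p) - x_target||^2.
# Adjoint: A^T lam = (x - x_target);  dJ/dp = lam^T b.
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_target = np.linalg.solve(A, 3.0 * b)      # target generated with p = 3, so the optimum is p = 3

def cost_and_gradient(p):
    x = np.linalg.solve(A, p * b)           # forward solve
    residual = x - x_target
    lam = np.linalg.solve(A.T, residual)    # single adjoint solve
    return 0.5 * residual @ residual, lam @ b

# Gradient descent on the single design parameter; the step is scaled by the
# (constant) curvature of this quadratic toy cost to keep the iteration stable.
sens = np.linalg.solve(A, b)                # dx/dp, constant for this linear model
step = 1.0 / (sens @ sens)
p = 0.0
for _ in range(50):
    J, dJdp = cost_and_gradient(p)
    p -= step * dJdp
print(f"recovered p = {p:.3f}, final cost = {J:.3e}")
```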

  18. Large - scale Rectangular Ruler Automated Verification Device

    Science.gov (United States)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive, the fixture, and the wheels. The control system design covers hardware and software: the hardware is mainly a single-chip microcontroller system, while the software handles the photoelectric autocollimator and the automatic data acquisition process. The device can acquire vertical measurement data automatically. The reliability of the device is verified by experimental comparison, and the results meet the requirements of the right-angle test procedure.

  19. A VLSI Implementation of Four-Phase Lift Controller Using Verilog HDL

    Science.gov (United States)

    Kumar, Manish; Singh, Priyanka; Singh, Shesha

    2017-08-01

    With the advent of a staggering range of new technologies providing ease of mobility and transportation, elevators have become an essential component of all high-rise buildings. An elevator is a type of vertical transportation that moves people between the floors of a high-rise building. A four-phase lift controller modeled in Verilog HDL using a Finite State Machine (FSM) is presented in this paper. Verilog HDL helps in the automated analysis and simulation of the lift controller circuit. The design is based on synchronous inputs and operates at a fixed frequency. The lift motion is controlled by accepting the destination floor level as input and generating control signals as output. In the proposed design a Verilog RTL code is developed and verified. Project Navigator of XILINX has been used as a code-writing platform and results were simulated using the Modelsim 5.4a simulator. The paper discusses the overall evolution of the design as well as the simulated results.
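
    Since the controller is specified as a finite state machine, a behavioral sketch helps make the request-handling logic concrete. The model below is written in Python rather than Verilog; the state set, the single-request input, and the motor/door outputs are illustrative assumptions, not the paper's RTL.

```python
from enum import Enum, auto

class LiftState(Enum):
    IDLE = auto()
    MOVING_UP = auto()
    MOVING_DOWN = auto()
    DOOR_OPEN = auto()

class LiftFSM:
    """Behavioral sketch of a four-floor lift controller FSM (not the paper's Verilog RTL)."""

    def __init__(self, floors=4):
        self.floors = floors
        self.current = 0
        self.target = 0
        self.state = LiftState.IDLE

    def request(self, floor):
        if 0 <= floor < self.floors:
            self.target = floor

    def step(self):
        """One synchronous step; returns the control output for this cycle."""
        if self.state == LiftState.DOOR_OPEN:
            self.state = LiftState.IDLE
            return "close_door"
        if self.current < self.target:
            self.state = LiftState.MOVING_UP
            self.current += 1
            return "motor_up"
        if self.current > self.target:
            self.state = LiftState.MOVING_DOWN
            self.current -= 1
            return "motor_down"
        if self.state in (LiftState.MOVING_UP, LiftState.MOVING_DOWN):
            self.state = LiftState.DOOR_OPEN
            return "open_door"
        self.state = LiftState.IDLE
        return "idle"

lift = LiftFSM()
lift.request(3)
print([lift.step() for _ in range(6)])   # motor_up x3, open_door, close_door, idle
```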

  20. Identifying Requirements for Effective Human-Automation Teamwork

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe; John O'Hara; Heather D. Medema; Johanna H. Oxstrand

    2014-06-01

    Previous studies have shown that poorly designed human-automation collaboration, such as poorly designed communication protocols, often leads to problems for the human operators, such as: lack of vigilance, complacency, and loss of skills. These problems often lead to suboptimal system performance. To address this situation, a considerable amount of research has been conducted to improve human-automation collaboration and to make automation function better as a “team player.” Much of this research is based on an understanding of what it means to be a good team player from the perspective of a human team. However, the research is often based on a simplified view of human teams and teamwork. In this study, we sought to better understand the capabilities and limitations of automation from the standpoint of human teams. We first examined human teams to identify the principles for effective teamwork. We next reviewed the research on integrating automation agents and human agents into mixed agent teams to identify the limitations of automation agents to conform to teamwork principles. This research resulted in insights that can lead to more effective human-automation collaboration by enabling a more realistic set of requirements to be developed based on the strengths and limitations of all agents.

  1. Design and Development of a Robot-Based Automation System for Cryogenic Crystal Sample Mounting at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu, D.; Preissner, C.; Nocher, D.; Han, Y.; Barraza, J.; Lee, P.; Lee, W.-K.; Cai, Z.; Ginell, S.; Alkire, R.; Lazarski, K.; Schuessler, R.; Joachimiak, A.

    2004-01-01

    X-ray crystallography is the primary method to determine the 3D structures of complex macromolecules at high resolution. In the years to come, the Advanced Photon Source (APS) and similar 3rd-generation synchrotron sources elsewhere will become the most powerful tools for studying atomic structures of biological molecules. One of the major bottlenecks in the x-ray data collection process is the constant need to change and realign the crystal sample. This is a very time- and manpower-consuming task. An automated sample mounting system will help to solve this bottleneck problem. We have developed a novel robot-based automation system for cryogenic crystal sample mounting at the APS. Design of the robot-based automation system, as well as its on-line test results at the Argonne Structural Biology Center (SBC) 19-BM experimental station, are presented in this paper

  2. User interface design principles for the SSM/PMAD automated power system

    International Nuclear Information System (INIS)

    Jakstas, L.M.; Myers, C.J.

    1991-01-01

    Computer-human interfaces are an integral part of developing software for spacecraft power systems. A well designed and efficient user interface enables an engineer to effectively operate the system, while it concurrently prevents the user from entering data which is beyond boundary conditions or performing operations which are out of context. A user interface should also be designed to ensure that the engineer easily obtains all useful and critical data for operating the system and is aware of all faults and states in the system. Martin Marietta, under contract to NASA George C. Marshall Space Flight Center, has developed a user interface for the Space Station Module Power Management and Distribution (SSM/PMAD) automated power system testbed which provides human access to the functionality of the power system, as well as exemplifying current techniques in user interface design. The testbed user interface was designed to enable an engineer to operate the system easily without having significant knowledge of computer systems, as well as provide an environment in which the engineer can monitor and interact with the SSM/PMAD system hardware. The design of the interface supports a global view of the most important data from the various hardware and software components, as well as enabling the user to obtain additional or more detailed data when needed. The components and representations of the SSM/PMAD testbed user interface are examined in this paper. An engineer's interactions with the system are also described

  3. Automated Design of Noise-Minimal, Safe Rotorcraft Trajectories

    Science.gov (United States)

    Morris, Robert A.; Venable, K. Brent; Lindsay, James

    2012-01-01

    NASA and the international community are investing in the development of a commercial transportation infrastructure that includes the increased use of rotorcraft, specifically helicopters and aircraft such as 40-passenger civil tiltrotors. Rotorcraft have a number of advantages over fixed wing aircraft, primarily in not requiring direct access to the primary fixed wing runways. As such they can operate at an airport without directly interfering with major air carrier and commuter aircraft operations. However, there is significant concern over the impact of noise on the communities surrounding the transportation facilities. In this paper we propose to address the rotorcraft noise problem by exploiting powerful search techniques coming from artificial intelligence, coupled with simulation and field tests, to design trajectories that are expected to improve on the amount of ground noise generated. This paper investigates the use of simulation based on predictive physical models to facilitate the search for low-noise trajectories using a class of automated search algorithms called local search. A novel feature of this approach is the ability to incorporate constraints into the problem formulation that addresses passenger safety and comfort.
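
    A minimal sketch of the local-search idea described above: waypoint altitudes of a candidate approach trajectory are perturbed one at a time, and a move is kept when it lowers a purely hypothetical ground-noise surrogate while respecting a descent-rate constraint standing in for safety and comfort limits. The cost model, the constraint, and all parameters are illustrative assumptions, not the predictive physical models used in the paper.

```python
import random

N_WAYPOINTS = 12
MAX_DESCENT_PER_SEG = 120.0     # hypothetical comfort/safety limit (m per segment)

def noise_cost(altitudes):
    """Hypothetical ground-noise surrogate: lower altitude -> more noise at the ground."""
    return sum(1.0 / max(a, 1.0) for a in altitudes)

def feasible(altitudes):
    return all(prev - cur <= MAX_DESCENT_PER_SEG
               for prev, cur in zip(altitudes, altitudes[1:]))

def local_search(initial, iterations=5000, step=25.0, seed=0):
    """First-improvement hill climbing over per-waypoint altitude perturbations."""
    rng = random.Random(seed)
    best = list(initial)
    best_cost = noise_cost(best)
    for _ in range(iterations):
        cand = list(best)
        i = rng.randrange(1, N_WAYPOINTS - 1)        # keep start and end points fixed
        cand[i] += rng.choice((-step, step))
        if cand[i] > 0 and feasible(cand) and noise_cost(cand) < best_cost:
            best, best_cost = cand, noise_cost(cand)
    return best, best_cost

# Straight-in descent from 1200 m to 50 m as the initial trajectory
initial = [1200 - i * (1150 / (N_WAYPOINTS - 1)) for i in range(N_WAYPOINTS)]
trajectory, cost = local_search(initial)
print(round(cost, 4), [round(a) for a in trajectory])
```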

  4. The effects of advanced digital signal processing concepts on VLSIC/VHSIC design

    Science.gov (United States)

    Jankowski, C.

    Implementations of sophisticated mathematical techniques in advanced digital signal processors can significantly improve performance. Future VLSI and VHSI circuit designs must include the practical realization of these algorithms. A structured design approach is described and illustrated with examples from an RNS FIR filter processor development project. The CAE hardware and software required to support tasks of this complexity are also discussed. An EWS is recommended for controlling essential functions such as logic optimization, simulation and verification. The total IC design system is illustrated with the implementation of a new high performance algorithm for computing complex magnitude.
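
    The abstract does not detail the paper's complex-magnitude algorithm. As a generic point of comparison only, the classic "alpha max plus beta min" approximation below is the kind of multiplier-free magnitude estimate often used in such hardware, shown here against the exact value; it is explicitly not the new algorithm referred to above.

```python
import math

def magnitude_alpha_beta(re, im, alpha=0.96043387, beta=0.39782473):
    """Classic 'alpha max + beta min' approximation to sqrt(re^2 + im^2).

    The default coefficients are the standard values that minimize the peak
    relative error of this approximation (about 3.96%); hardware versions often
    round them to cheap shift-and-add constants.
    """
    a, b = abs(re), abs(im)
    return alpha * max(a, b) + beta * min(a, b)

worst = max(
    abs(magnitude_alpha_beta(math.cos(t), math.sin(t)) - 1.0)
    for t in (i * math.pi / 1800 for i in range(3600))
)
print(f"peak relative error on the unit circle: {worst * 100:.2f}%")
```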

  5. Bioprocessing automation in cell therapy manufacturing: Outcomes of special interest group automation workshop.

    Science.gov (United States)

    Ball, Oliver; Robinson, Sarah; Bure, Kim; Brindley, David A; Mccall, David

    2018-04-01

    Phacilitate held a Special Interest Group workshop event in Edinburgh, UK, in May 2017. The event brought together leading stakeholders in the cell therapy bioprocessing field to identify present and future challenges and propose potential solutions to automation in cell therapy bioprocessing. Here, we review and summarize discussions from the event. Deep biological understanding of a product, its mechanism of action and indication pathogenesis underpin many factors relating to bioprocessing and automation. To fully exploit the opportunities of bioprocess automation, therapeutics developers must closely consider whether an automation strategy is applicable, how to design an 'automatable' bioprocess and how to implement process modifications with minimal disruption. Major decisions around bioprocess automation strategy should involve all relevant stakeholders; communication between technical and business strategy decision-makers is of particular importance. Developers should leverage automation to implement in-process testing, in turn applicable to process optimization, quality assurance (QA)/ quality control (QC), batch failure control, adaptive manufacturing and regulatory demands, but a lack of precedent and technical opportunities can complicate such efforts. Sparse standardization across product characterization, hardware components and software platforms is perceived to complicate efforts to implement automation. The use of advanced algorithmic approaches such as machine learning may have application to bioprocess and supply chain optimization. Automation can substantially de-risk the wider supply chain, including tracking and traceability, cryopreservation and thawing and logistics. The regulatory implications of automation are currently unclear because few hardware options exist and novel solutions require case-by-case validation, but automation can present attractive regulatory incentives. Copyright © 2018 International Society for Cellular Therapy

  6. Automation's influence on nuclear power plants: a look at three accidents and how automation played a role.

    Science.gov (United States)

    Schmitt, Kara

    2012-01-01

    Nuclear power is one of the ways that we can design an efficient sustainable future. Automation is the primary system used to assist operators in the task of monitoring and controlling nuclear power plants (NPP). Automation performs tasks such as assessing the status of the plant's operations as well as making real time life critical situational specific decisions. While the advantages and disadvantages of automation are well studied in a variety of domains, accidents remind us that there is still vulnerability to unknown variables. This paper will look at the effects of automation within three NPP accidents and incidents and will consider why automation failed in preventing these accidents from occurring. It will also review the accidents at the Three Mile Island, Chernobyl, and Fukushima Daiichi NPPs in order to determine where better use of automation could have resulted in a more desirable outcome.

  7. Designing of smart home automation system based on Raspberry Pi

    Science.gov (United States)

    Saini, Ravi Prakash; Singh, Bhanu Pratap; Sharma, Mahesh Kumar; Wattanawisuth, Nattapol; Leeprechanon, Nopbhorn

    2016-03-01

    Locally networked or remotely controlled home automation systems have become a popular paradigm because of their numerous advantages and are suitable for academic research. This paper proposes a method for an implementation of a Raspberry Pi based home automation system presented with an android phone access interface. The power consumption profile across the connected load is measured accurately through programming. Users can access the graph of total power consumption with respect to time worldwide using their Dropbox account. An android application has been developed to channelize the monitoring and controlling operation of home appliances remotely. This application facilitates control of the operating pins of the Raspberry Pi by pressing the corresponding key for turning "on" and "off" any desired appliance. Systems can range from the simple room lighting control to smart microcontroller based hybrid systems incorporating several other additional features. Smart home automation systems are being adopted to achieve flexibility, scalability, security in the sense of data protection through the cloud-based data storage protocol, reliability, energy efficiency, etc.
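
    A minimal sketch of the appliance-switching side of such a system, using the RPi.GPIO library to drive a relay from a GPIO pin and a small text command protocol standing in for the Android/Dropbox interface described above; the pin number, the relay wiring, the port, and the command format are assumptions for illustration.

```python
import socket
import RPi.GPIO as GPIO   # available on Raspberry Pi OS

RELAY_PIN = 17            # assumed BCM pin wired to a relay driving the appliance
PORT = 5000               # assumed port the phone app connects to

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def handle_command(cmd: str) -> str:
    """Very small text protocol: 'on' / 'off' toggles the relay pin."""
    if cmd == "on":
        GPIO.output(RELAY_PIN, GPIO.HIGH)
        return "appliance on"
    if cmd == "off":
        GPIO.output(RELAY_PIN, GPIO.LOW)
        return "appliance off"
    return "unknown command"

try:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("0.0.0.0", PORT))
        server.listen(1)
        while True:
            conn, _ = server.accept()
            with conn:
                cmd = conn.recv(64).decode().strip().lower()
                conn.sendall(handle_command(cmd).encode())
finally:
    GPIO.cleanup()
```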

  8. Controls and automation in the SPIRAL project

    International Nuclear Information System (INIS)

    Bothner, U.; Boulot, A.; Maherault, J.; Martial, L.

    1999-01-01

    Within the framework of the SPIRAL collaboration, the control and automation team of the Accelerator-Exotic Beam R and D Department has had the following tasks: 1. automation of the resonator high frequency equipment of the CIME cyclotron; 2. automation of the vacuum equipment, i.e. the low energy line (TBE), the CIME cyclotron, and the low energy line (BE); 3. automation of load safety for power supply; 4. for each of these tasks a circuitry file based on the SCHEMA software has been worked out. The programs required for the automation of load safety for power supply (STEP5, PROTOOL, DESIGNER 4.1) were developed and implemented for PC

  9. 49 CFR 238.445 - Automated monitoring.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Automated monitoring. 238.445 Section 238.445... Equipment § 238.445 Automated monitoring. (a) Each passenger train shall be equipped to monitor the... limiting the speed of the train. (c) The monitoring system shall be designed with an automatic self-test...

  10. DESIGN AND IMPLEMENTATION OF A VHDL PROCESSOR FOR DCT BASED IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    Md. Shabiul Islam

    2017-11-01

    Full Text Available This paper describes the design and implementation of a VHDL processor meant for performing the 2D Discrete Cosine Transform (DCT) for use in image compression applications. The design flow starts from the system specification to implementation on silicon, and the entire process is carried out using an advanced workstation-based design environment for digital signal processing. The software allows the bit-true analysis to ensure that the designed VLSI processor satisfies the required specifications. The bit-true analysis is performed on all levels of abstraction (behavior, VHDL, etc.). The motivations behind the work are smaller chip area, faster processing, and reduced cost of the chip
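
    A bit-true hardware design of this kind is typically checked against a floating-point software reference; a compact 2D DCT-II reference model (separable row/column transform) is sketched below under the assumption of the standard orthonormal DCT-II definition, which may differ in scaling conventions from the processor described in the paper.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II transform matrix C, so that X = C @ x for a length-n vector x."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2(block):
    """Separable 2-D DCT of an n x n block: transform rows, then columns."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeffs):
    C = dct_matrix(coeffs.shape[0])
    return C.T @ coeffs @ C

# Round-trip check on a random 8x8 image block
rng = np.random.default_rng(1)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
print(np.allclose(idct2(dct2(block)), block))   # True: the transform is orthonormal
```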

  11. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Directory of Open Access Journals (Sweden)

    Alistair McEwan

    2003-06-01

    Full Text Available The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  12. Development and Evaluation of a Measure of Library Automation.

    Science.gov (United States)

    Pungitore, Verna L.

    1986-01-01

    Construct validity and reliability estimates indicate that study designed to measure utilization of automation in public and academic libraries was successful in tentatively identifying and measuring three subdimensions of level of automation: quality of hardware, method of software development, and number of automation specialists. Questionnaire…

  13. 10 K gate I(2)L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I(2)L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structure concept that consists of three major techniques: shallow grooved-isolation, I(2)L active layer etching, and I(2)L current gain increase. I(2)L circuits with 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV(CE0) of more than 10 V and an f(T) of 5 GHz, and lateral p-n-p transistors having an f(T) of 150 MHz.

  14. An automated digital imaging system for environmental monitoring applications

    Science.gov (United States)

    Bogle, Rian; Velasco, Miguel; Vogel, John

    2013-01-01

    Recent improvements in the affordability and availability of high-resolution digital cameras, data loggers, embedded computers, and radio/cellular modems have advanced the development of sophisticated automated systems for remote imaging. Researchers have successfully placed and operated automated digital cameras in remote locations and in extremes of temperature and humidity, ranging from the islands of the South Pacific to the Mojave Desert and the Grand Canyon. With the integration of environmental sensors, these automated systems are able to respond to local conditions and modify their imaging regimes as needed. In this report we describe in detail the design of one type of automated imaging system developed by our group. It is easily replicated, low-cost, highly robust, and is a stand-alone automated camera designed to be placed in remote locations, without wireless connectivity.

  15. Automated control system for the Temelin nuclear power plant

    International Nuclear Information System (INIS)

    Labik, V.

    1990-01-01

    Instrumentation of the automated control system of the Temelin nuclear power plant in the section of the main production unit and of the major auxiliary equipment is described, the results of testing are reported, and the present status of design activities is assessed. The suitability of application of Czechoslovak automation facilities to the instrumentation of the automated control system of the power plant was confirmed by the Soviet designer and supplier based on favorable results of polygonal testing. Capacity problems in the development of the designs and user software are alleviated by extensive cooperation. It is envisaged that all tasks will be fulfilled as planned. (P.A.). 1 fig., 5 refs

  16. Ontology-Based Device Descriptions and Device Repository for Building Automation Devices

    Directory of Open Access Journals (Sweden)

    Dibowski Henrik

    2011-01-01

    Full Text Available Device descriptions play an important role in the design and commissioning of modern building automation systems and help reduce design time and costs. However, all established device descriptions are specialized for certain purposes and suffer from several weaknesses. This hinders further design automation, which is strongly needed for increasingly complex building automation systems. To overcome these problems, this paper presents novel Ontology-based Device Descriptions (ODDs) along with a layered ontology architecture, a specific ontology view approach with virtual properties, a generic access interface, a triple store-based database backend, and a generic search mask GUI with an underlying query generation algorithm. It enables a formal, unified, and extensible specification of building automation devices, ensures their comparability, and facilitates computer-enabled retrieval, selection, and interoperability evaluation, which is essential for automated design. The scalability of the approach to several tens of thousands of devices is demonstrated.
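
    To illustrate the triple-store-backed retrieval the paper describes, the sketch below builds a tiny in-memory RDF graph of device descriptions with rdflib and runs a SPARQL query selecting devices by protocol and function. The vocabulary (namespace and property names) is invented for the example and is not the ODD ontology itself.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical vocabulary standing in for the ODD ontology of the paper
EX = Namespace("http://example.org/odd#")

g = Graph()
for name, protocol, function in [
    ("dimmer1", "KNX", "Dimming"),
    ("sensor7", "BACnet", "TemperatureSensing"),
    ("dimmer2", "KNX", "Switching"),
]:
    device = EX[name]
    g.add((device, RDF.type, EX.Device))
    g.add((device, EX.supportsProtocol, Literal(protocol)))
    g.add((device, EX.providesFunction, Literal(function)))

# Retrieve all KNX devices that provide a dimming function
query = """
SELECT ?device WHERE {
    ?device a ex:Device ;
            ex:supportsProtocol "KNX" ;
            ex:providesFunction "Dimming" .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.device)
```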

  17. Design of a novel automated methanol feed system for pilot-scale fermentation of Pichia pastoris.

    Science.gov (United States)

    Hamaker, Kent H; Johnson, Daniel C; Bellucci, Joseph J; Apgar, Kristie R; Soslow, Sherry; Gercke, John C; Menzo, Darrin J; Ton, Christopher

    2011-01-01

    Large-scale fermentation of Pichia pastoris requires a large volume of methanol feed during the induction phase. However, a large volume of methanol feed is difficult to use in the processing suite because of the inconvenience of constant monitoring, manual manipulation steps, and fire and explosion hazards. To optimize and improve safety of the methanol feed process, a novel automated methanol feed system has been designed and implemented for industrial fermentation of P. pastoris. Details of the design of the methanol feed system are described. The main goals of the design were to automate the methanol feed process and to minimize the hazardous risks associated with storing and handling large quantities of methanol in the processing area. The methanol feed system is composed of two main components: a bulk feed (BF) system and up to three portable process feed (PF) systems. The BF system automatically delivers methanol from a central location to the portable PF system. The PF system provides precise flow control of linear, step, or exponential feed of methanol to the fermenter. Pilot-scale fermentations with linear and exponential methanol feeds were conducted using two Mut(+) (methanol utilization plus) strains, one expressing a recombinant therapeutic protein and the other a monoclonal antibody. Results show that the methanol feed system is accurate, safe, and efficient. The feed rates for both linear and exponential feed methods were within ± 5% of the set points, and the total amount of methanol fed was within 1% of the targeted volume. Copyright © 2011 American Institute of Chemical Engineers (AIChE).
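
    For reference, an exponential feed profile of the general form F(t) = F0*exp(mu_set*t), capped at a maximum pump rate, can be tabulated as below; the use of this particular formula and all parameter values are illustrative assumptions, not the validated set points of the system described in the paper.

```python
import math

def exponential_feed_profile(f0_ml_per_h, mu_set_per_h, duration_h,
                             step_h=0.5, f_max_ml_per_h=None):
    """Tabulate an exponential feed rate F(t) = F0 * exp(mu_set * t), optionally capped."""
    profile = []
    t = 0.0
    while t <= duration_h:
        rate = f0_ml_per_h * math.exp(mu_set_per_h * t)
        if f_max_ml_per_h is not None:
            rate = min(rate, f_max_ml_per_h)   # respect the pump's maximum delivery rate
        profile.append((t, rate))
        t += step_h
    return profile

# Illustrative numbers only: 20 mL/h starting feed, growth-rate set point 0.05 1/h,
# 24 h induction, pump capped at 60 mL/h.
for t, rate in exponential_feed_profile(20.0, 0.05, 24.0, step_h=4.0, f_max_ml_per_h=60.0):
    print(f"t = {t:4.1f} h   feed = {rate:6.2f} mL/h")
```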

  18. The Influence of Cultural Factors on Trust in Automation

    Science.gov (United States)

    Chien, Shih-Yi James

    2016-01-01

    Human interaction with automation is a complex process that requires both skilled operators and complex system designs to effectively enhance overall performance. Although automation has successfully managed complex systems throughout the world for over half a century, inappropriate reliance on automation can still occur, such as the recent…

  19. Aviation safety and automation technology for subsonic transports

    Science.gov (United States)

    Albers, James A.

    1991-01-01

    Discussed here are aviation safety human factors and air traffic control (ATC) automation research conducted at the NASA Ames Research Center. Research results are given in the areas of flight deck and ATC automation, displays and warning systems, crew coordination, and crew fatigue and jet lag. Accident investigation and an incident reporting system that is used to guide the human factors research are discussed. A design philosophy for human-centered automation is given, along with an evaluation of automation on advanced technology transports. Intelligent error-tolerant systems such as electronic checklists are discussed, along with design guidelines for reducing procedure errors. The data on evaluation of Crew Resource Management (CRM) training indicate highly significant positive changes in appropriate flight deck behavior and more effective use of available resources by crew members receiving the training.

  20. Operating procedure automation to enhance safety of nuclear power plants

    International Nuclear Information System (INIS)

    Husseiny, A.A.; Sabri, Z.A.; Adams, S.K.; Rodriguez, R.J.; Packer, D.; Holmes, J.W.

    1989-01-01

    The use of logic statements and computer assistance is explored as a means of automating and improving the design of operating procedures, including those employed in abnormal and emergency situations. Operating procedures for downpower and loss of forced circulation are used for demonstration. Human-factors analysis is performed on generic emergency operating procedures for three control strategies: manual, semi-automatic, and automatic, using standard emergency operating procedures. This preliminary analysis shows that automation of procedures is feasible provided that fault-tolerant software and hardware become available for the design of the controllers. Recommendations are provided for tests to substantiate the promise of enhanced plant safety. Adequate design of operating procedures through automation may alleviate several major operational problems of nuclear power plants. Automation of procedures is also necessary for partial or overall automatic control of plants. Fully automatic operation is needed for space applications, while supervised automation of land-based and offshore plants may become the thrust of a new generation of nuclear power plants. (orig.)

  1. Designing of smart home automation system based on Raspberry Pi

    International Nuclear Information System (INIS)

    Saini, Ravi Prakash; Singh, Bhanu Pratap; Sharma, Mahesh Kumar; Wattanawisuth, Nattapol; Leeprechanon, Nopbhorn

    2016-01-01

    Locally networked or remotely controlled home automation systems have become a popular paradigm because of their numerous advantages and are well suited to academic research. This paper proposes an implementation of a Raspberry Pi based home automation system with an Android phone access interface. The power consumption profile of the connected load is measured accurately through programming. Users can access a graph of total power consumption over time from anywhere using their Dropbox account. An Android application has been developed to channel the monitoring and control of home appliances remotely. The application controls the operating pins of the Raspberry Pi: pressing the corresponding key turns any desired appliance “on” or “off”. Such systems can range from simple room lighting control to smart microcontroller-based hybrid systems incorporating several additional features. Smart home automation systems are being adopted to achieve flexibility, scalability, security in the sense of data protection through a cloud-based data storage protocol, reliability, energy efficiency, etc.

  2. Designing of smart home automation system based on Raspberry Pi

    Energy Technology Data Exchange (ETDEWEB)

    Saini, Ravi Prakash; Singh, Bhanu Pratap [B K Birla Institute of Engineering & Technology, Pilani, Rajasthan (India); Sharma, Mahesh Kumar; Wattanawisuth, Nattapol; Leeprechanon, Nopbhorn, E-mail: Dr.N.L@ieee.org [Thammasat University, Rangsit Campus, Pathum Thani (Thailand)

    2016-03-09

    Locally networked or remotely controlled home automation systems have become a popular paradigm because of their numerous advantages and are well suited to academic research. This paper proposes an implementation of a Raspberry Pi based home automation system with an Android phone access interface. The power consumption profile of the connected load is measured accurately through programming. Users can access a graph of total power consumption over time from anywhere using their Dropbox account. An Android application has been developed to channel the monitoring and control of home appliances remotely. The application controls the operating pins of the Raspberry Pi: pressing the corresponding key turns any desired appliance “on” or “off”. Such systems can range from simple room lighting control to smart microcontroller-based hybrid systems incorporating several additional features. Smart home automation systems are being adopted to achieve flexibility, scalability, security in the sense of data protection through a cloud-based data storage protocol, reliability, energy efficiency, etc.
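
    For readers unfamiliar with how a Raspberry Pi switches appliances, the following minimal sketch drives relay channels through the RPi.GPIO library under hypothetical BCM pin assignments; it is not the authors' implementation, which also covers power metering, Dropbox upload, and the Android front end.

```python
# Minimal sketch, assuming the RPi.GPIO library and relays wired to BCM pins 17/27;
# the pin-to-appliance mapping is illustrative only.
import RPi.GPIO as GPIO

APPLIANCES = {"lamp": 17, "fan": 27}  # hypothetical BCM pin assignments

def setup() -> None:
    GPIO.setmode(GPIO.BCM)
    for pin in APPLIANCES.values():
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def switch(appliance: str, on: bool) -> None:
    """Drive the relay for one appliance: True = on, False = off."""
    GPIO.output(APPLIANCES[appliance], GPIO.HIGH if on else GPIO.LOW)

if __name__ == "__main__":
    setup()
    try:
        switch("lamp", True)   # the action an Android app key-press would trigger
        switch("lamp", False)
    finally:
        GPIO.cleanup()
```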

  3. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation of the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  4. Manned spacecraft automation and robotics

    Science.gov (United States)

    Erickson, Jon D.

    1987-01-01

    The Space Station holds promise of being a showcase user and driver of advanced automation and robotics technology. The author addresses the advances in automation and robotics from the Space Shuttle - with its high-reliability redundancy management and fault tolerance design and its remote manipulator system - to the projected knowledge-based systems for monitoring, control, fault diagnosis, planning, and scheduling, and the telerobotic systems of the future Space Station.

  5. The Automation-by-Expertise-by-Training Interaction.

    Science.gov (United States)

    Strauch, Barry

    2017-03-01

    I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance. Transportation accidents that, across a 30-year interval, demonstrated identical automation-related operator errors suggest a need to reexamine traditional views of automation. I review accident investigation reports, regulator studies, and literature on human computer interaction, expertise, and training and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors. Automated systems continue to provide capabilities exceeding operators' need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire necessary automation-related system expertise. Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors. Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.

  6. Pilot opinions on high level flight deck automation issues: Toward the development of a design philosophy

    Science.gov (United States)

    Tenney, Yvette J.; Rogers, William H.; Pew, Richard W.

    1995-01-01

    There has been much concern in recent years about the rapid increase in automation on commercial flight decks. A survey of pilots was composed of three major sections. The first section asked pilots to rate different automation components that exist on the latest commercial aircraft regarding their obtrusiveness and the attention and effort required in using them. The second section addressed general 'automation philosophy' issues. The third section focused on issues related to levels and amount of automation. The results indicate that pilots of advanced aircraft like their automation, use it, and would welcome more automation. However, they also believe that automation has many disadvantages, especially fully autonomous automation. They want their automation to be simple and reliable and to produce predictable results. The greatest needs for higher levels of automation were in pre-flight, communication, systems management, and task management functions; in planning as well as response tasks; and in high-workload situations. There is an irony and a challenge in the implications of these findings. On the one hand pilots would like new automation to be simple and reliable, but they need it to support the most complex part of the job: managing and planning tasks in high-workload situations.

  7. An Extended Case Study Methodology for Investigating Influence of Cultural, Organizational, and Automation Factors on Human-Automation Trust

    Science.gov (United States)

    Koltai, Kolina Sun; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Johnson, Walter; Cacanindin, Artemio

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. These include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among organizations involved in the system development.

  8. Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing

    Science.gov (United States)

    Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.

    2010-01-01

    The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development
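
    The data-driven idea described above can be illustrated with a generic, hypothetical sketch in which activity sequences, transition triggers, and parameter reconfigurations live in a data table rather than in compiled code; the mode names and parameters below are invented for illustration and are unrelated to the actual Orion GN&C data.

```python
# Generic sketch of data-driven sequencing (not the Orion GN&C code): editing the
# table changes behavior without recompiling the software. All names are illustrative.
SEQUENCE = [
    {"mode": "coast",     "next": "burn_prep", "when": "burn_time_minus_60s", "params": {"att_deadband_deg": 5.0}},
    {"mode": "burn_prep", "next": "burn",      "when": "burn_time",           "params": {"att_deadband_deg": 0.5}},
    {"mode": "burn",      "next": "coast",     "when": "delta_v_achieved",    "params": {}},
]

def step(current_mode: str, event: str, config: dict) -> tuple[str, dict]:
    """Advance the automated sequence when its trigger event arrives."""
    for entry in SEQUENCE:
        if entry["mode"] == current_mode and entry["when"] == event:
            config.update(entry["params"])   # parameter reconfiguration comes from data
            return entry["next"], config
    return current_mode, config              # no transition defined for this event

mode, cfg = "coast", {"att_deadband_deg": 5.0}
mode, cfg = step(mode, "burn_time_minus_60s", cfg)
print(mode, cfg)   # burn_prep {'att_deadband_deg': 0.5}
```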

  9. Automated contamination monitoring for hot particles

    International Nuclear Information System (INIS)

    Johnstone, G.; Case, L.

    1987-01-01

    INS Corp., the largest nuclear laundry company in the United States, has recently developed two types of automated contamination monitoring systems: 1) the Automated Laundry Monitor (ALM), which provides quality assurance monitoring for protective clothing contamination and 2) a low-level automated monitoring system for Plastic Volume Reduction Service (PVRS). The presentation details the inaccuracies associated with hand-probe frisking which led to the development of the ALM. The ALM was designed for 100% quality assurance monitoring of garments to the most stringent customer requirements. A review of why the ALM is essential in verifying the absence of hot particles on garments is given. The final topic addresses the expansion of the ALM technology in support of the INS Plastic Volume Reduction Service by monitoring decontaminated plastics to free release levels. This presentation reviews the design and operation of both monitoring systems

  10. Integrated circuit design using design automation

    International Nuclear Information System (INIS)

    Gwyn, C.W.

    1976-09-01

    Although the use of computer aids to develop integrated circuits is relatively new at Sandia, the program has been very successful. The results have verified the utility of the in-house CAD design capability. Custom IC's have been developed in much shorter times than available through semiconductor device manufacturers. In addition, security problems were minimized and a saving was realized in circuit cost. The custom CMOS IC's were designed at less than half the cost of designing with conventional techniques. In addition to the computer aided design, the prototype fabrication and testing capability provided by the semiconductor development laboratory and microelectronics computer network allows the circuits to be fabricated and evaluated before the designs are transferred to the commercial semiconductor manufacturers for production. The Sandia design and prototype fabrication facilities provide the capability of complete custom integrated circuit development entirely within the ERDA laboratories

  11. QuantifyMe: An Open-Source Automated Single-Case Experimental Design Platform

    Directory of Open Access Journals (Sweden)

    Sara Taylor

    2018-04-01

    Full Text Available Smartphones and wearable sensors have enabled unprecedented data collection, with many products now providing feedback to users about recommended step counts or sleep durations. However, these recommendations do not provide personalized insights that have been shown to be best suited for a specific individual. A scientific way to find individualized recommendations and causal links is to conduct experiments using single-case experimental design; however, properly designed single-case experiments are not easy to conduct on oneself. We designed, developed, and evaluated a novel platform, QuantifyMe, for novice self-experimenters to conduct proper-methodology single-case self-experiments in an automated and scientific manner using their smartphones. We provide software for the platform that we used (available for free on GitHub), which provides the methodological elements to run many kinds of customized studies. In this work, we evaluate its use with four different kinds of personalized investigations, examining how variables such as sleep duration and regularity, activity, and leisure time affect personal happiness, stress, productivity, and sleep efficiency. We conducted a six-week pilot study (N = 13) to evaluate QuantifyMe. We describe the lessons learned developing the platform and recommendations for its improvement, as well as its potential for enabling personalized insights to be scientifically evaluated in many individuals, reducing the high administrative cost for advancing human health and wellbeing.

  12. QuantifyMe: An Open-Source Automated Single-Case Experimental Design Platform.

    Science.gov (United States)

    Taylor, Sara; Sano, Akane; Ferguson, Craig; Mohan, Akshay; Picard, Rosalind W

    2018-04-05

    Smartphones and wearable sensors have enabled unprecedented data collection, with many products now providing feedback to users about recommended step counts or sleep durations. However, these recommendations do not provide personalized insights that have been shown to be best suited for a specific individual. A scientific way to find individualized recommendations and causal links is to conduct experiments using single-case experimental design; however, properly designed single-case experiments are not easy to conduct on oneself. We designed, developed, and evaluated a novel platform, QuantifyMe, for novice self-experimenters to conduct proper-methodology single-case self-experiments in an automated and scientific manner using their smartphones. We provide software for the platform that we used (available for free on GitHub), which provides the methodological elements to run many kinds of customized studies. In this work, we evaluate its use with four different kinds of personalized investigations, examining how variables such as sleep duration and regularity, activity, and leisure time affect personal happiness, stress, productivity, and sleep efficiency. We conducted a six-week pilot study ( N = 13) to evaluate QuantifyMe. We describe the lessons learned developing the platform and recommendations for its improvement, as well as its potential for enabling personalized insights to be scientifically evaluated in many individuals, reducing the high administrative cost for advancing human health and wellbeing.

  13. Towards a characterization of information automation systems on the flight deck

    Science.gov (United States)

    Dudley, Rachel Feddersen

    This thesis summarizes research to investigate the characteristics that define information automation systems used on aircraft flight decks and the significant impacts that these characteristics have on pilot performance. Major accomplishments of the work include the development of a set of characteristics that describe information automation systems on the flight deck and an experiment designed to study a subset of these characteristics. Information automation systems on the flight deck are responsible for the collection, processing, analysis, and presentation of data to the flightcrew. These systems pose human factors issues and challenges that must be considered by designers of these systems. Based on a previously developed formal definition of information automation for aircraft flight deck systems, an analysis process was developed and conducted to reach a refined set of information automation characteristics. In this work, characteristics are defined as a set of properties or attributes that describe an information automation system's operation or behavior, which can be used to identify and assess potential human factors issues. Hypotheses were formed for a subset of the characteristics: Automation Visibility, Information Quality, and Display Complexity. An experimental investigation was developed to measure performance impacts related to these characteristics, which showed mixed results of expected and surprising findings, with many interactions. A set of recommendations were then developed based on the experimental observations. Ensuring that the right information is presented to pilots at the right time and in the appropriate manner is the job of flight deck system designers. This work provides a foundation for developing recommendations and guidelines specific to information automation on the flight deck with the goal of improving the design and evaluation of information automation systems before they are implemented.

  14. Virtual Machine in Automation Projects

    OpenAIRE

    Xing, Xiaoyuan

    2010-01-01

    Virtual machines, as an engineering tool, have recently been introduced into automation projects in Tetra Pak Processing System AB. The goal of this paper is to examine how to better utilize virtual machines for automation projects. This paper designs different project scenarios using virtual machines. It analyzes the installability, performance and stability of virtual machines based on the test results. Technical solutions concerning virtual machines are discussed, such as the conversion with physical...

  15. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  16. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  17. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193
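
    The two signal-processing stages named in these records, NEO-based detection and GHA-based feature extraction, can be sketched in a few lines of NumPy. The threshold factor, window length, and learning rate below are illustrative choices, not the fixed-point parameters of the reported ASIC.

```python
# Minimal NumPy sketch of NEO spike detection and GHA (Sanger's rule) feature
# extraction; all numeric parameters are illustrative assumptions.
import numpy as np

def neo(x: np.ndarray) -> np.ndarray:
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x: np.ndarray, k: float = 8.0, win: int = 32) -> np.ndarray:
    """Return windows starting at samples where NEO exceeds k times its mean."""
    psi = neo(x)
    idx = np.where(psi > k * psi.mean())[0]
    return np.array([x[i:i + win] for i in idx if i + win <= len(x)])

def gha_update(W: np.ndarray, spike: np.ndarray, lr: float = 1e-3) -> np.ndarray:
    """One generalized Hebbian (Sanger) update of the principal-component estimates W."""
    y = W @ spike                                          # projections onto components
    W += lr * (np.outer(y, spike) - np.tril(np.outer(y, y)) @ W)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 1.0, 10_000)                  # stand-in for a recording
    W = rng.normal(0.0, 0.1, (3, 32))                      # 3 components, 32-sample windows
    for spike in detect_spikes(signal):
        W = gha_update(W, spike)
```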

  18. An Automation System for Optimizing a Supply Chain Network Design under the Influence of Demand Uncertainty

    OpenAIRE

    Polany, Rany

    2012-01-01

    This research develops and applies an integrated hierarchical framework for modeling a multi-echelon supply chain network design, under the influence of demand uncertainty. The framework is a layered integration of two levels: macro, high-level scenario planning combined with micro, low-level Monte Carlo simulation of uncertainties in demand. To facilitate rapid simulation of the effects of demand uncertainty, the integrated framework was implemented as a dashboard automation system using Mic...
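
    The micro level of such a framework, Monte Carlo sampling of uncertain demand, can be illustrated with a short sketch; the lognormal demand model, capacity values, and fill-rate metric are assumptions made here for illustration and are not taken from the cited work.

```python
# Minimal sketch of Monte Carlo demand uncertainty feeding a simple fill-rate metric.
import numpy as np

rng = np.random.default_rng(seed=42)

def simulate_fill_rate(capacity: float, mean_demand: float, cv: float,
                       n_trials: int = 10_000) -> float:
    """Fraction of demand served when demand is lognormal with the given mean and CV."""
    sigma = np.sqrt(np.log(1.0 + cv ** 2))          # lognormal shape from coefficient of variation
    mu = np.log(mean_demand) - 0.5 * sigma ** 2     # location so that E[demand] = mean_demand
    demand = rng.lognormal(mu, sigma, n_trials)
    served = np.minimum(demand, capacity)
    return float(served.sum() / demand.sum())

for cap in (900.0, 1000.0, 1200.0):
    print(cap, round(simulate_fill_rate(cap, mean_demand=1000.0, cv=0.3), 3))
```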

  19. Agent-Oriented Embedded Control System Design and Development of a Vision-Based Automated Guided Vehicle

    Directory of Open Access Journals (Sweden)

    Wu Xing

    2012-07-01

    Full Text Available This paper presents a control system design and development approach for a vision-based automated guided vehicle (AGV) based on the multi-agent system (MAS) methodology and embedded system resources. A three-phase agent-oriented design methodology, Prometheus, is used to analyse system functions, construct operation scenarios, define agent types and design the MAS coordination mechanism. The control system is then developed in an embedded implementation containing a digital signal processor (DSP) and an advanced RISC machine (ARM) by using the multitasking processing capacity of multiple microprocessors and system services of a real-time operating system (RTOS). As a paradigm, an onboard embedded controller is designed and developed for the AGV with a camera detecting guiding landmarks, and the entire procedure has a high efficiency and a clear hierarchy. A vision guidance experiment for our AGV is carried out in a space-limited laboratory environment to verify the perception capacity and the onboard intelligence of the agent-oriented embedded control system.

  20. Optimizing the balance between task automation and human manual control in simulated submarine track management.

    Science.gov (United States)

    Chen, Stephanie I; Visser, Troy A W; Huf, Samuel; Loft, Shayne

    2017-09-01

    Automation can improve operator performance and reduce workload, but can also degrade operator situation awareness (SA) and the ability to regain manual control. In 3 experiments, we examined the extent to which automation could be designed to benefit performance while ensuring that individuals maintained SA and could regain manual control. Participants completed a simulated submarine track management task under varying task load. The automation was designed to facilitate information acquisition and analysis, but did not make task decisions. Relative to a condition with no automation, the continuous use of automation improved performance and reduced subjective workload, but degraded SA. Automation that was engaged and disengaged by participants as required (adaptable automation) moderately improved performance and reduced workload relative to no automation, but degraded SA. Automation engaged and disengaged based on task load (adaptive automation) provided no benefit to performance or workload, and degraded SA relative to no automation. Automation never led to significant return-to-manual deficits. However, all types of automation led to degraded performance on a nonautomated task that shared information processing requirements with automated tasks. Given these outcomes, further research is urgently required to establish how to design automation to maximize performance while keeping operators cognitively engaged. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Selecting automation for the clinical chemistry laboratory.

    Science.gov (United States)

    Melanson, Stacy E F; Lindeman, Neal I; Jarolim, Petr

    2007-07-01

    Laboratory automation proposes to improve the quality and efficiency of laboratory operations, and may provide a solution to the quality demands and staff shortages faced by today's clinical laboratories. Several vendors offer automation systems in the United States, with both subtle and obvious differences. Arriving at a decision to automate, and the ensuing evaluation of available products, can be time-consuming and challenging. Although considerable discussion concerning the decision to automate has been published, relatively little attention has been paid to the process of evaluating and selecting automation systems. Our objective is to outline a process for evaluating and selecting automation systems as a reference for laboratories contemplating laboratory automation. Our Clinical Chemistry Laboratory staff recently evaluated all major laboratory automation systems in the United States, with their respective chemistry and immunochemistry analyzers. Our experience is described and organized according to the selection process, the important considerations in clinical chemistry automation, decisions and implementation, and we give conclusions pertaining to this experience. Including the formation of a committee, workflow analysis, submitting a request for proposal, site visits, and making a final decision, the process of selecting chemistry automation took approximately 14 months. We outline important considerations in automation design, preanalytical processing, analyzer selection, postanalytical storage, and data management. Selecting clinical chemistry laboratory automation is a complex, time-consuming process. Laboratories considering laboratory automation may benefit from the concise overview and narrative and tabular suggestions provided.

  2. Automation of Taxiing

    Directory of Open Access Journals (Sweden)

    Jaroslav Bursík

    2017-01-01

    Full Text Available The article focuses on the possibility of automating taxiing, the part of a flight which, under adverse weather conditions, greatly reduces the operational usability of an airport, and the only part of a flight that has not yet been affected by automation. Taxiing is currently handled manually by the pilot, who controls the airplane based on information from visual perception. The article primarily deals with possible ways of obtaining navigational information and its automatic transfer to the controls. Analyzed and assessed were currently available technologies such as computer vision, Light Detection and Ranging and Global Navigation Satellite System, which are useful for navigation, and their general implementation into an airplane was designed. Obstacles to the implementation were also identified. The result is a proposed combination of systems along with their installation into the airplane's systems so that automated taxiing can be used.

  3. Maneuver Automation Software

    Science.gov (United States)

    Uffelman, Hal; Goodson, Troy; Pellegrin, Michael; Stavert, Lynn; Burk, Thomas; Beach, David; Signorelli, Joel; Jones, Jeremy; Hahn, Yungsun; Attiyah, Ahlam

    2009-01-01

    The Maneuver Automation Software (MAS) automates the process of generating commands for maneuvers to keep the spacecraft of the Cassini-Huygens mission on a predetermined prime mission trajectory. Before MAS became available, a team of approximately 10 members had to work about two weeks to design, test, and implement each maneuver in a process that involved running many maneuver-related application programs and then serially handing off data products to other parts of the team. MAS enables a three-member team to design, test, and implement a maneuver in about one-half hour after Navigation has processed tracking data. MAS accepts more than 60 parameters and 22 files as input directly from users. MAS consists of Practical Extraction and Reporting Language (PERL) scripts that link, sequence, and execute the maneuver-related application programs: "Pushing a single button" on a graphical user interface causes MAS to run navigation programs that design a maneuver; programs that create sequences of commands to execute the maneuver on the spacecraft; and a program that generates predictions about maneuver performance and generates reports and other files that enable users to quickly review and verify the maneuver design. MAS can also generate presentation materials, initiate electronic command request forms, and archive all data products for future reference.

  4. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, and easy maintainability and increased reliability. Although extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.

  5. A Framework for Semi-Automated Implementation of Multidimensional Data Models

    Directory of Open Access Journals (Sweden)

    Ilona Mariana NAGY

    2012-08-01

    Full Text Available Data warehousing solution development is a challenging task that requires considerable resources from enterprises and sustained commitment from the stakeholders. Costs derive mostly from the amount of time invested in the design and physical implementation of these large projects, time that, we consider, may be decreased through the automation of several processes. Thus, we present a framework for semi-automated implementation of multidimensional data models and introduce an automation prototype intended to reduce the time needed to generate data structures in the warehousing environment. Our research focuses on the design of an automation component and the development of a corresponding prototype from technical metadata.
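
    One way such an automation component can work, sketched here under the assumption of a relational star schema, is to generate warehouse structures directly from technical metadata; the metadata layout and naming conventions below are illustrative and not the framework's actual format.

```python
# Minimal sketch: driving DDL generation for a fact table from a metadata dictionary.
# The metadata structure, table names, and SQL types are illustrative assumptions.
FACT_META = {
    "name": "fact_sales",
    "dimensions": {"dim_date": "date_key", "dim_product": "product_key"},
    "measures": {"quantity": "INTEGER", "revenue": "NUMERIC(12,2)"},
}

def generate_fact_ddl(meta: dict) -> str:
    """Build a CREATE TABLE statement with one foreign key per dimension plus measures."""
    cols = [f"    {fk} INTEGER NOT NULL REFERENCES {dim}({fk})"
            for dim, fk in meta["dimensions"].items()]
    cols += [f"    {name} {sql_type}" for name, sql_type in meta["measures"].items()]
    return f"CREATE TABLE {meta['name']} (\n" + ",\n".join(cols) + "\n);"

print(generate_fact_ddl(FACT_META))
```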

  6. Methodology, Algorithms, and Emerging Tool for Automated Design of Intelligent Integrated Multi-Sensor Systems

    Directory of Open Access Journals (Sweden)

    Andreas König

    2009-11-01

    Full Text Available The emergence of novel sensing elements, computing nodes, wireless communication and integration technology provides unprecedented possibilities for the design and application of intelligent systems. Each new application system must be designed from scratch, employing sophisticated methods ranging from conventional signal processing to computational intelligence. Currently, a significant part of this overall algorithmic chain of the computational system model still has to be assembled manually by experienced designers in a time- and labor-consuming process. In this research work, this challenge is taken up, and a methodology and algorithms for the automated design of intelligent, integrated and resource-aware multi-sensor systems employing multi-objective evolutionary computation are introduced. The proposed methodology tackles the challenge of rapid prototyping of such systems under realization constraints and, additionally, includes features of system-instance-specific self-correction for sustained operation at large volume and in a dynamically changing environment. The extension of these concepts to reconfigurable hardware platforms renders so-called self-x sensor systems, where the x stands, e.g., for self-monitoring, -calibrating, -trimming, and -repairing/-healing. Selected experimental results prove the applicability and effectiveness of the proposed methodology and emerging tool. With our approach, competitive results were achieved with regard to classification accuracy, flexibility, and design speed under additional design constraints.

  7. Automated sample analysis and remediation

    International Nuclear Information System (INIS)

    Hollen, R.; Settle, F.

    1995-01-01

    The Contaminant Analysis Automation Project is developing an automated chemical analysis system to address the current needs of the US Department of Energy (DOE). These needs focus on the remediation of large amounts of radioactive and chemically hazardous wastes stored, buried and still being processed at numerous DOE sites. This paper outlines the advantages of the system under development, and details the hardware and software design. A prototype system for characterizing polychlorinated biphenyls in soils is also described

  8. Automated sampling and data processing derived from biomimetic membranes

    DEFF Research Database (Denmark)

    Perry, Mark; Vissing, Thomas; Boesen, P.

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long-time series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet (TM)) for efficient data management.

  9. Automation of a thermal expansion instrument

    Energy Technology Data Exchange (ETDEWEB)

    Holland, L.L.

    1979-03-01

    Automation of a thermal expansion instrument using a minicomputer system with analog-to-digital converter inputs and flip-flop relay outputs is described. The necessary hardware link and the software were developed to allow equipment control, data acquisition, data reduction, and report generation by the minicomputer. The design of the automation allows non-programmers to run the experiment, reduce the data, and generate the report.

  10. The Impact of Automation Reliability and Operator Fatigue on Performance and Reliance

    Science.gov (United States)

    2016-09-23

    (Cummings et al., 2007). Automation designed to assist operators in overload situations may promote operator disengagement during periods of low ... (Calhoun et al., 2011). This testbed offers several tasks designed to emulate the cognitive demands that an operator managing multiple UAVs is likely ... reliable (Cronbach's α = 0.94) measure of affective and cognitive components of trust in automation. Items gauge confidence in an automation and ...

  11. Application of advanced technology to space automation

    Science.gov (United States)

    Schappell, R. T.; Polhemus, J. T.; Lowrie, J. W.; Hughes, C. A.; Stephens, J. R.; Chang, C. Y.

    1979-01-01

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly accentuate this statement and should provide further incentive for immediate development of specific automation technology as defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

  12. Automated 741 document preparation: Oak Ridge National Laboratory's Automated Safeguards Information System (OASIS)

    International Nuclear Information System (INIS)

    Austin, H.C.; Gray, L.M.

    1982-01-01

    OASIS has been providing for Oak Ridge National Laboratory's total safeguards needs since being placed on line in April 1980. The system supports near real-time nuclear materials safeguards and accountability control. The original design of OASIS called for an automated facsimile of a 741 document to be prepared as a functional by-product of updating the inventory. An attempt was made to utilize, intact, DOE-Albuquerque's automated 741 system to generate the facsimile; however, the five-page document produced proved too cumbersome. Albuquerque's programs were modified to print an original 741 document utilizing standard DOE/NRC 741 forms. It is felt that the best features of both the automated and manually generated 741 documents have been incorporated. Automation of the source data for 741 shipping documents produces greater efficiency while reducing possible errors. Through utilization of the standard DOE/NRC form, continuity within the NMMSS system is maintained, thus minimizing the confusion and redundancy associated with facsimiles. OASIS now fulfills the original concept of near real-time accountability by furnishing a viable 741 document as a function of updating the inventory

  13. Development of an Automated Technique for Failure Modes and Effect Analysis

    DEFF Research Database (Denmark)

    Blanke, M.; Borch, Ole; Allasia, G.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...

  14. Development of an automated technique for failure modes and effect analysis

    DEFF Research Database (Denmark)

    Blanke, Mogens; Borch, Ole; Bagnoli, F.

    1999-01-01

    Advances in automation have provided integration of monitoring and control functions to enhance the operator's overview and ability to take remedy actions when faults occur. Automation in plant supervision is technically possible with integrated automation systems as platforms, but new design methods are needed to cope efficiently with the complexity and to ensure that the functionality of a supervisor is correct and consistent. In particular these methods are expected to significantly improve fault tolerance of the designed systems. The purpose of this work is to develop a software module implementing an automated technique for Failure Modes and Effects Analysis (FMEA). This technique is based on the matrix formulation of FMEA for the investigation of failure propagation through a system. As main result, this technique will provide the design engineer with decision tables for fault handling...
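
    The matrix formulation mentioned in these records can be illustrated with a small sketch: a Boolean propagation matrix whose transitive closure yields the reachable effects of each component failure, from which decision tables could be tabulated. The three-component example is invented for illustration and is not taken from the cited work.

```python
# Minimal NumPy sketch of matrix-based failure propagation: P[i, j] = True means a
# failure of component i propagates to component j. The example system is hypothetical.
import numpy as np

components = ["sensor", "controller", "actuator"]
P = np.array([
    [False, True,  False],   # sensor fault propagates to controller
    [False, False, True],    # controller fault propagates to actuator
    [False, False, False],
])

def transitive_closure(P: np.ndarray) -> np.ndarray:
    """Accumulate multi-step propagation until the reachable-effects matrix stabilizes."""
    R = P.copy()
    while True:
        # Boolean 'matrix product' of current reach with P, merged into R
        R_next = R | ((R.astype(int) @ P.astype(int)) > 0)
        if np.array_equal(R_next, R):
            return R
        R = R_next

R = transitive_closure(P)
for i, comp in enumerate(components):
    effects = [components[j] for j in np.where(R[i])[0]]
    print(f"failure of {comp} -> affects {effects}")
```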

  15. Automated ultrasonic inspection of nuclear plant components

    International Nuclear Information System (INIS)

    Baron, J.A.; Dolbey, M.P.

    1982-01-01

    For reasons of safety and efficiency, automated systems are used in performing ultrasonic inspection of nuclear components. An automated system designed specifically for the inspection of headers in a nuclear plant is described. In-service inspection results obtained with this system are shown to correlate with pre-service inspection results obtained by manual methods

  16. Automation and Job Satisfaction among Reference Librarians.

    Science.gov (United States)

    Whitlatch, Jo Bell

    1991-01-01

    Discussion of job satisfaction and the level of job performance focuses on the effect of automation on job satisfaction among reference librarians. The influence of stress is discussed, a job strain model is explained, and examples of how to design a job to reduce the stress caused by automation are given. (12 references) (LRW)

  17. Wireless Android Based Home Automation System

    Directory of Open Access Journals (Sweden)

    Muhammad Tanveer Riaz

    2017-01-01

    Full Text Available This manuscript presents a prototype and design implementation of an advanced home automation system that uses Wi-Fi technology as the network infrastructure connecting its parts. The proposed system consists of two main components. The first part is the server, the system core that manages and controls the user's home; users and the system administrator can manage and control the system locally (over a local area network) or remotely (over the internet). The second part is the hardware interface module, which provides an appropriate interface to the sensors and actuators of the home automation system. Unlike most home automation systems available on the market, the proposed system is scalable in that one server can manage many hardware interface modules as long as they exist within network coverage. The system supports a wide range of home automation devices such as appliances, power management components, and security components. The proposed system is better in terms of flexibility and scalability than the commercially available home automation systems.

  18. Automation and decision support in interactive consumer products.

    OpenAIRE

    Sauer, J.; Rüttinger, B.

    2007-01-01

    This article presents two empirical studies (n=30, n=48) that are concerned with different forms of automation in interactive consumer products. The goal of the studies was to evaluate the effectiveness of two types of automation: perceptual augmentation (i.e. supporting users' action selection and implementation). Furthermore, the effectiveness of non-product information (i.e. labels attached to product) in supporting automation design was evaluated. The findings suggested greater benefits f...

  19. Asleep at the automated wheel-Sleepiness and fatigue during highly automated driving.

    Science.gov (United States)

    Vogelpohl, Tobias; Kühn, Matthias; Hummel, Thomas; Vollrath, Mark

    2018-03-20

    Due to the lack of active involvement in the driving situation and due to monotonous driving environments, drivers with automation may be prone to become fatigued faster than manual drivers (e.g. Schömig et al., 2015). However, little is known about the progression of fatigue during automated driving and its effects on the ability to take back manual control after a take-over request. In this driving simulator study with N = 60 drivers we used a three-factorial 2 × 2 × 12 mixed design to analyze the progression (12 × 5 min; within subjects) of driver fatigue in drivers with automation compared to manual drivers (between subjects). Driver fatigue was induced as either mainly sleep related or mainly task related fatigue (between subjects). Additionally, we investigated the drivers' reactions to a take-over request in a critical driving scenario to gain insights into the ability of fatigued drivers to regain manual control and situation awareness after automated driving. Drivers in the automated driving condition exhibited facial indicators of fatigue after 15 to 35 min of driving. Manual drivers only showed similar indicators of fatigue if they suffered from a lack of sleep and then only after a longer period of driving (approx. 40 min). Several drivers in the automated condition closed their eyes for extended periods of time. In the driving with automation condition, mean automation deactivation times after a take-over request were slower for a certain percentage (about 30%) of the drivers with a lack of sleep (M = 3.2; SD = 2.1 s) compared to the reaction times after a long drive (M = 2.4; SD = 0.9 s). Drivers with automation also took longer than manual drivers to first glance at the speed display after a take-over request and were more likely to stay behind a braking lead vehicle instead of overtaking it. Drivers are unable to stay alert during extended periods of automated driving without non-driving related tasks. Fatigued drivers could

  20. A Federated Enterprise Architecture and MBSE Modeling Framework for Integrating Design Automation into a Global PLM Approach

    OpenAIRE

    Vosgien , Thomas; Rigger , Eugen; Schwarz , Martin; Shea , Kristina

    2017-01-01

    Part 1: PLM Maturity, Implementation and Adoption; PLM and Design Automation (DA) are two interdependent and necessary approaches to increase the performance and efficiency of product development processes. Often, DA systems’ usability suffers due to a lack of integration in industrial business environments stemming from the independent consideration of PLM and DA. This article proposes a methodological and modeling framework for developing and deploying DA solutions w...

  1. Design and Implementation of GSM Based Automated Home Security System

    Directory of Open Access Journals (Sweden)

    Love Aggarwal

    2014-05-01

    Full Text Available The Automated Home Security System aims at building a security system for common households using a GSM modem, sensors and a microcontroller. For many years, an impeccable security system has been a prime need of every homeowner, and the increasing crime rate has further pressed the need for it. Our system is an initiative in this direction. The system provides a security function by monitoring the surroundings at home for intruders, fire, gas leakages, etc. using sensors, and issues alerts to the owners and local authorities via SMS over GSM. It provides an automation function, as it can control (turn on/off) the various home appliances while the owners are away, via SMS. Thus the Automated Home Security System is self-sufficient and can be relied upon. It is also capable of establishing two-way communication with its owner, so that he/she can keep a watch on his/her home via sensor information or live video streaming. A camera can be installed for continuous monitoring of the system and its surroundings. The system consists of two main parts: hardware and software. The hardware consists of a microcontroller, sensors, a buzzer and a GSM modem, while the software is implemented by tools using Embedded 'C'.

  2. Automation of Electrical Cable Harnesses Testing

    Directory of Open Access Journals (Sweden)

    Zhuming Bi

    2017-12-01

    Full Text Available Traditional automated systems, such as industrial robots, are applied in well-structured environments, and many automated systems have limited adaptability to deal with complexity and uncertainty; therefore, the applications of industrial robots in small- and medium-sized enterprises (SMEs) are very limited. The majority of manual operations in SMEs are too complicated for automation. Rapidly developing information technology (IT) has brought new opportunities for the automation of manufacturing and assembly processes in ill-structured environments. Note that an automation solution should be designed to meet the given requirements of the specified application, and it differs from one application to another. In this paper, we look into the feasibility of automated testing for electric cable harnesses, and our focus is on generic strategies for improving the adaptability of automation solutions. In particular, the concept of modularization is adopted in developing hardware and software to maximize system adaptability for testing a wide scope of products. The proposed system has been implemented, and its performance has been evaluated by executing tests on actual products. The testing experiments have shown that the automated system outperformed manual operations greatly in terms of cost savings, productivity and reliability. Given its potential for increasing system adaptability and reducing cost, the presented work has theoretical and practical significance for extension to other automation solutions in SMEs.
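
    The core check in automated harness testing, comparing measured point-to-point continuity against an expected netlist to flag opens and shorts, can be sketched briefly; the netlist, pin names, and simulated measurement below are illustrative assumptions, not the paper's test setup.

```python
# Minimal sketch of a continuity check against an expected netlist; the measurement
# function stands in for a switching-matrix/ohmmeter scan and returns simulated data.
EXPECTED = {("J1-1", "J2-1"), ("J1-2", "J2-2"), ("J1-3", "J2-3")}

def measure_continuity() -> set[tuple[str, str]]:
    """Placeholder for the instrument scan of connected pin pairs (simulated result)."""
    return {("J1-1", "J2-1"), ("J1-2", "J2-2"), ("J1-3", "J2-2")}

def check(expected: set, measured: set) -> tuple[set, set]:
    opens = expected - measured    # connections that should exist but were not found
    shorts = measured - expected   # connections that were found but should not exist
    return opens, shorts

opens, shorts = check(EXPECTED, measure_continuity())
print("open circuits:", opens)
print("unexpected connections:", shorts)
```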

  3. Automation of substation design

    Energy Technology Data Exchange (ETDEWEB)

    Milks, D. [Autodesk Inc., San Rafael, CA (United States); Scullion, T. [AutomationForce, Burlington, ON (Canada)

    2010-07-01

    Smart libraries and substation design tools are now being used by electric utilities to simplify and optimize the design of electrical control systems. A rules-based design and database approach was used to determine the placement of connections and to design schematics, panel layouts, wiring diagrams and cable schedules. Substation design solutions can reduce labour resources and field errors. Methods of synthesizing technologies in a database-centric architecture were discussed, along with methods of selecting and adopting appropriate design technologies and methods for gaining efficiencies and managing change within organizations. The study demonstrated that the successful adoption of new technology requires a clear understanding of business objectives, technology requirements, and management support. 2 refs., 5 figs.

  4. Design and Operation of an Open, Interoperable Automated Demand Response Infrastructure for Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Piette, Mary Ann; Ghatikar, Girish; Kiliccote, Sila; Watson, David; Koch, Ed; Hennage, Dan

    2009-05-01

    This paper describes the concept for and lessons from the development and field-testing of an open, interoperable communications infrastructure to support automated demand response (auto-DR). Automating DR allows greater levels of participation, improved reliability, and repeatability of the DR in participating facilities. This paper also presents the technical and architectural issues associated with auto-DR and a description of the demand response automation server (DRAS), the client/server-based middleware used to automate the interactions between the utilities, or any DR serving entity, and their customers for DR programs. Use case diagrams are presented to show the role of the DRAS between the utility/ISO and the clients at the facilities.

  5. Cockpit Automation Technology CSERIAC-CAT

    Science.gov (United States)

    1991-06-01

    Final report (July 1989 - December 1990) for the Cockpit Automation Technology (CSERIAC-CAT) task, report AL-TR-1991-0078 (AD-A273 124), by Trudy S. Abrams and Cindy D. Martin. The report covers the Boeing-developed CAT software tools and the work of facilitating their use by the cockpit design community; a brief description of the overall task is given.

  6. Influence of Cultural, Organizational, and Automation Capability on Human Automation Trust: A Case Study of Auto-GCAS Experimental Test Pilots

    Science.gov (United States)

    Koltai, Kolina; Ho, Nhut; Masequesmay, Gina; Niedober, David; Skoog, Mark; Cacanindin, Artemio; Johnson, Walter; Lyons, Joseph

    2014-01-01

    This paper discusses a case study that examined the influence of cultural, organizational and automation capability upon human trust in, and reliance on, automation. In particular, this paper focuses on the design and application of an extended case study methodology, and on the foundational lessons revealed by it. Experimental test pilots involved in the research and development of the US Air Force's newly developed Automatic Ground Collision Avoidance System served as the context for this examination. An eclectic, multi-pronged approach was designed to conduct this case study, and proved effective in addressing the challenges associated with the case's politically sensitive and military environment. Key results indicate that the system design was in alignment with pilot culture and organizational mission, indicating the potential for appropriate trust development in operational pilots. Key factors identified include the low-vulnerability/high-risk nature of the pilot profession, automation transparency and suspicion, system reputation, and the setup of and communications among the organizations involved in the system development.

  7. AUTOMATION FOR THE SYNTHESIS AND APPLICATION OF PET RADIOPHARMACEUTICALS

    International Nuclear Information System (INIS)

    Alexoff, D.L.

    2001-01-01

    The development of automated systems supporting the production and application of PET radiopharmaceuticals has been an important focus of researchers since the first successes of using carbon-11 (Comar et al., 1979) and fluorine-18 (Reivich et al., 1979) labeled compounds to visualize functional activity of the human brain. These initial successes of imaging the human brain soon led to applications in the human heart (Schelbert et al., 1980), and quickly radiochemists began to see the importance of automation to support PET studies in humans (Lambrecht, 1982; Langstrom et al., 1983). Driven by the necessity of controlling processes emitting high fluxes of 511 keV photons, and by the tedium of repetitive syntheses for carrying out these human PET investigations, academic and government scientists have designed, developed and tested many useful and novel automated systems in the past twenty years. These systems, originally designed primarily by radiochemists, not only carry out effectively the tasks they were designed for, but also demonstrate significant engineering innovation in the field of laboratory automation.

  8. Research Initiatives and Preliminary Results In Automation Design In Airspace Management in Free Flight

    Science.gov (United States)

    Corker, Kevin; Lebacqz, J. Victor (Technical Monitor)

    1997-01-01

    NASA and the FAA have entered into a joint venture to explore, define, design and implement a new airspace management operating concept. The fundamental premise of that concept is that technologies and procedures need to be developed for flight deck and ground operations to improve the efficiency, the predictability, the flexibility and the safety of airspace management and operations. To that end NASA Ames has undertaken an initial development and exploration of "key concepts" in free flight airspace management technology development. Work has been undertaken on human factors issues in automation aiding design, coupled aiding systems between air and ground, communication protocols in distributed decision making, and analytic techniques for defining concepts of airspace density and operator cognitive load. This paper reports the progress of these efforts, which are not intended to definitively solve the many evolving issues of design for future ATM systems, but to provide preliminary results to chart the parameters of performance and the topology of the analytic effort required. The preliminary research in provision of cockpit display of traffic information, dynamic density definition, distributed decision making, situation awareness models and human performance models is discussed as it focuses on the theme of "design requirements".

  9. Human-Automation Allocations for Current Robotic Space Operations

    Science.gov (United States)

    Marquez, Jessica J.; Chang, Mai L.; Beard, Bettina L.; Kim, Yun Kyung; Karasinski, John A.

    2018-01-01

    Within the Human Research Program, one risk delineates the uncertainty surrounding crew working with automation and robotics in spaceflight. The Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is concerned with the detrimental effects on crew performance due to ineffective user interfaces, system designs and/or functional task allocation, potentially compromising mission success and safety. Risk arises because we have limited experience with complex automation and robotics. One key gap within HARI is related to functional allocation. The gap states: We need to evaluate, develop, and validate methods and guidelines for identifying human-automation/robot task information needs, function allocation, and team composition for future long duration, long distance space missions. Allocations determine human-system performance, as they identify the functions and performance levels required by the automation/robotic system and, in turn, what work the crew is expected to perform and the necessary human performance requirements. Allocations must take into account the capabilities and limitations of each of the human, automation, and robotic systems. Some functions may be intuitively assigned to the human versus the robot, but to optimize efficiency and effectiveness, purposeful role assignments will be required. The role of automation and robotics will significantly change in future exploration missions, particularly as crew becomes more autonomous from ground controllers. Thus, we must understand the suitability of existing function allocation methods within NASA as well as the existing allocations established by the few robotic systems that are operational in spaceflight. In order to evaluate future methods of robotic allocations, we must first benchmark the allocations and allocation methods that have been used. We will present 1) documentation of human-automation-robotic allocations in existing, operational spaceflight systems; and 2) To

  10. Automated Operations Development for Advanced Exploration Systems

    Science.gov (United States)

    Haddock, Angie T.; Stetson, Howard

    2012-01-01

    Automated space operations command and control software development and its implementation must be an integral part of the vehicle design effort. The software design must encompass autonomous fault detection, isolation, and recovery capabilities and also provide "single button" intelligent functions for the crew. Development, operations and safety approval experience with the Timeliner system onboard the International Space Station (ISS), which provided autonomous monitoring with response and single-command functionality of payload systems, can be built upon for future automated operations, as the ISS Payload effort was the first and only autonomous command and control system to be in continuous execution (6 years), 24 hours a day, 7 days a week, within a crewed spacecraft environment. Utilizing proven capabilities from the ISS Higher Active Logic (HAL) System, along with the execution component design from within the HAL 9000 Space Operating System, this design paper will detail the initial HAL System software architecture and interfaces as applied to NASA's Habitat Demonstration Unit (HDU) in support of the Advanced Exploration Systems, Autonomous Mission Operations project. The development and implementation of integrated simulators within this development effort will also be detailed, and it is the first step in verifying the effectiveness of the HAL 9000 Integrated Test-Bed Component [2] designs. This design paper will conclude with a summary of the current development status and future development goals as they pertain to automated command and control for the HDU.

  11. Explicit control of adaptive automation under different levels of environmental stress.

    Science.gov (United States)

    Sauer, Jürgen; Kao, Chung-Shan; Wastell, David; Nickel, Peter

    2011-08-01

    This article examines the effectiveness of three different forms of explicit control of adaptive automation under low- and high-stress conditions, operationalised by different levels of noise. In total, 60 participants were assigned to one of three types of automation design (free, prompted and forced choice). They were trained for 4 h on a highly automated simulation of a process control environment, called AutoCAMS. This was followed by a 4-h testing session under noise exposure and quiet conditions. Measures of performance, psychophysiology and subjective reactions were taken. The results showed that all three modes of explicit control of adaptive automation were able to attenuate the negative effects of noise. This was partly due to the fact that operators opted for higher levels of automation under noise. It also emerged that forced choice showed marginal advantages over the other two automation modes. Statement of Relevance: This work is relevant to the design of adaptive automation since it emphasises the need to consider the impact of work-related stressors during task completion. During the presence of stressors, different forms of operator support through automation may be required than under more favourable working conditions.

  12. Automated Distributed Simulation in Ptolemy II

    DEFF Research Database (Denmark)

    Lázaro Cuadrado, Daniel; Ravn, Anders Peter; Koch, Peter

    2007-01-01

    the ensuing communication and synchronization problems. Very often the designer has to explicitly specify extra information concerning distribution for the framework to make an effort to exploit parallelism. This paper presents Automated Distributed Simulation (ADS), which allows the designer to forget about...

  13. E-health, phase two: the imperative to integrate process automation with communication automation for large clinical reference laboratories.

    Science.gov (United States)

    White, L; Terner, C

    2001-01-01

    The initial efforts of e-health have fallen far short of expectations. They were buoyed by the hype and excitement of the Internet craze but limited by their lack of understanding of important market and environmental factors. E-health now recognizes that legacy systems and processes are important, that there is a technology adoption process that needs to be followed, and that demonstrable value drives adoption. Initial e-health transaction solutions have targeted mostly low-cost problems. These solutions invariably are difficult to integrate into existing systems, typically requiring manual interfacing to supported processes. This limitation in particular makes them unworkable for large volume providers. To meet the needs of these providers, e-health companies must rethink their approaches, appropriately applying technology to seamlessly integrate all steps into existing business functions. E-automation is a transaction technology that automates steps, integration of steps, and information communication demands, resulting in comprehensive automation of entire business functions. We applied e-automation to create a billing management solution for clinical reference laboratories. Large volume, onerous regulations, small margins, and only indirect access to patients challenge large laboratories' billing departments. Couple these problems with outmoded, largely manual systems and it becomes apparent why most laboratory billing departments are in crisis. Our approach has been to focus on the most significant and costly problems in billing: errors, compliance, and system maintenance and management. The core of the design relies on conditional processing, a "universal" communications interface, and ASP technologies. The result is comprehensive automation of all routine processes, driving out errors and costs. Additionally, compliance management and billing system support and management costs are dramatically reduced. The implications of e-automated processes can extend

  14. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
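
    For readers unfamiliar with the LMS adaptive linear combiner used above for feature extraction, a minimal digital software sketch follows (the paper's contribution is the analog, mismatch-aware implementation; the data here are synthetic and the step size is an illustrative assumption):

        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_inputs = 1000, 8
        X = rng.normal(size=(n_samples, n_inputs))          # input vectors
        w_true = rng.normal(size=n_inputs)                   # unknown target combiner
        d = X @ w_true + 0.01 * rng.normal(size=n_samples)   # desired responses

        w = np.zeros(n_inputs)                               # adaptive weights
        mu = 0.01                                            # learning rate (assumed)
        for x, target in zip(X, d):
            y = w @ x                    # combiner output
            e = target - y               # error signal
            w += mu * e * x              # LMS weight update (software analogue of on-chip learning)

        print(np.round(w - w_true, 3))   # residuals shrink as the weights converge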

  15. Development of a framework of human-centered automation for the nuclear industry

    International Nuclear Information System (INIS)

    Nelson, W.R.; Haney, L.N.

    1993-01-01

    Introduction of automated systems into control rooms for advanced reactor designs is often justified on the basis of increased efficiency and reliability, without a detailed assessment of how the new technologies will influence the role of the operator. Such a "technology-centered" approach carries with it the risk that entirely new mechanisms for human error will be introduced, resulting in some unpleasant surprises when the plant goes into operation. The aviation industry has experienced some of these surprises since the introduction of automated systems into the cockpits of advanced technology aircraft. Pilot errors have actually been induced by automated systems, especially when the pilot doesn't fully understand what the automated systems are doing during all modes of operation. In order to structure the research program for investigating these problems, the National Aeronautics and Space Administration (NASA) has developed a framework for human-centered automation. This framework is described in the NASA document Human-Centered Aircraft Automation Philosophy by Charles Billings. It is the thesis of this paper that a corresponding framework of human-centered automation should be developed for the nuclear industry. Such a framework would serve to guide the design and regulation of automated systems for advanced reactor designs, and would help prevent some of the problems that have arisen in other applications that have followed a "technology-centered" approach.

  16. Rapid prototyping of an automated video surveillance system: a hardware-software co-design approach

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2011-06-01

    FPGA devices with embedded DSP and memory blocks, and high-speed interfaces are ideal for real-time video processing applications. In this work, a hardware-software co-design approach is proposed to effectively utilize FPGA features for a prototype of an automated video surveillance system. Time-critical steps of the video surveillance algorithm are designed and implemented in the FPGA's logic elements to maximize parallel processing. Other non-time-critical tasks are achieved by executing a high-level language program on an embedded Nios-II processor. Pre-tested and verified video and interface functions from a standard video framework are utilized to significantly reduce development and verification time. Custom and parallel processing modules are integrated into the video processing chain by Altera's Avalon Streaming video protocol. Other data control interfaces are achieved by connecting hardware controllers to a Nios-II processor using Altera's Avalon Memory Mapped protocol.

  17. Aspects of the design of the automated system for code generation of electrical items of technological equipment

    Directory of Open Access Journals (Sweden)

    Erokhin V.V.

    2017-09-01

    Full Text Available The article presents aspects of designing an automated system for generating codes for the electrical elements of process equipment using CASE tools. We propose our own technology for the iterative development of such systems. The proposed methodology uses the ERwin Data Modeler database development tool from Computer Associates and the authors' tool for automatic code generation, ERwin Class Builder. The implemented design tool is an extension built on top of ERwin Data Modeler that broadens its functionality: ERwin Data Modeler works with logical and physical data models and allows the generation of database descriptions and DDL scripts.

  18. Process development for automated solar-cell and module production. Task 4. Automated array assembly. Quarterly report No. 3

    Energy Technology Data Exchange (ETDEWEB)

    Hagerty, J. J.; Gifford, M.

    1981-04-15

    The Automated Lamination Station is mechanically complete and is currently undergoing final wiring. The high current driver and isolator boards have been completed and installed, and the main interface board is under construction. The automated vacuum chamber has had a minor redesign to increase stiffness and improve the cover open/close mechanism. Design of the Final Assembly Station has been completed and construction is underway.

  19. QuantifyMe: An Open-Source Automated Single-Case Experimental Design Platform †

    Science.gov (United States)

    Sano, Akane; Ferguson, Craig; Mohan, Akshay; Picard, Rosalind W.

    2018-01-01

    Smartphones and wearable sensors have enabled unprecedented data collection, with many products now providing feedback to users about recommended step counts or sleep durations. However, these recommendations do not provide personalized insights that have been shown to be best suited for a specific individual. A scientific way to find individualized recommendations and causal links is to conduct experiments using single-case experimental design; however, properly designed single-case experiments are not easy to conduct on oneself. We designed, developed, and evaluated a novel platform, QuantifyMe, for novice self-experimenters to conduct proper-methodology single-case self-experiments in an automated and scientific manner using their smartphones. We provide software for the platform that we used (available for free on GitHub), which provides the methodological elements to run many kinds of customized studies. In this work, we evaluate its use with four different kinds of personalized investigations, examining how variables such as sleep duration and regularity, activity, and leisure time affect personal happiness, stress, productivity, and sleep efficiency. We conducted a six-week pilot study (N = 13) to evaluate QuantifyMe. We describe the lessons learned developing the platform and recommendations for its improvement, as well as its potential for enabling personalized insights to be scientifically evaluated in many individuals, reducing the high administrative cost for advancing human health and wellbeing. PMID:29621133

  20. Altering user' acceptance of automation through prior automation exposure.

    Science.gov (United States)

    Bekier, Marek; Molesworth, Brett R C

    2017-06-01

    Air navigation service providers worldwide see increased use of automation as one solution to overcome the capacity constraints embedded in the present air traffic management (ATM) system. However, increased use of automation within any system is dependent on user acceptance. The present research sought to determine if the point at which an individual is no longer willing to accept or cooperate with automation can be manipulated. Forty participants underwent training on a computer-based air traffic control programme, followed by two ATM exercises (order counterbalanced), one with and one without the aid of automation. Results revealed that, after exposure to a task with automation assistance, user acceptance of high(er) levels of automation (the 'tipping point') decreased, suggesting it is indeed possible to alter automation acceptance. Practitioner Summary: This paper investigates whether the point at which a user of automation rejects automation (i.e. the 'tipping point') is constant or can be manipulated. The results revealed that, after exposure to a task with automation assistance, user acceptance of high(er) levels of automation decreased, suggesting it is possible to alter automation acceptance.

  1. Software package to automate the design and production of translucent building structures made of pvc

    Directory of Open Access Journals (Sweden)

    Petrova Irina Yur’evna

    2016-08-01

    Full Text Available The article describes the features of the design and production of translucent building structures made of PVC. The automation systems for this process currently on the market are analysed, and their advantages and disadvantages are identified. Based on this analysis, a set of requirements for automation systems for the design and production of translucent building structures made of PVC is formulated, and the basic entities involved in those business processes are identified. The necessary functions for the main application and for the dealers' application are specified. The main application is based on the technological platform 1C: Enterprise 8.2. The dealers' module is a .NET application developed with Microsoft Visual Studio and Microsoft SQL Server, because these software products have client versions that are free for end users (.NET Framework 4.0 Client Profile and Microsoft SQL Server 2008 Express). The implementation features of the developed software complex are described and the relevant charts are given. The scheme of system deployment and the protocols of data exchange between the 1C server, the 1C client and the dealer are presented, and the functions supported by the 1C module and the .NET module are described. The article also describes the content of the class library developed for the .NET module and specifies how the two applications are integrated into a single software package. The features of the GUI organization are described and the corresponding screenshots are given. Possible ways of further developing the described software complex are presented, and a conclusion is drawn about its competitiveness and the expediency of further research.

  2. Cassini-Huygens maneuver automation for navigation

    Science.gov (United States)

    Goodson, Troy; Attiyah, Amy; Buffington, Brent; Hahn, Yungsun; Pojman, Joan; Stavert, Bob; Strange, Nathan; Stumpf, Paul; Wagner, Sean; Wolff, Peter; hide

    2006-01-01

    Many times during the Cassini-Huygens mission to Saturn, propulsive maneuvers must be spaced so closely together that there isn't enough time or workforce to execute the maneuver-related software manually, one subsystem at a time. Automation is required. Automating the maneuver design process has involved close cooperation between teams. We present the contribution from the Navigation system. In scope, this includes trajectory propagation and search, generation of ephemerides, general tasks such as email notification and file transfer, and presentation materials. The software has been used to help understand maneuver optimization results, Huygens probe delivery statistics, and Saturn ring-plane crossing geometry. The Maneuver Automation Software (MAS), developed for the Cassini-Huygens program, enables frequent maneuvers by handling mundane tasks such as creation of deliverable files, file delivery, generation and transmission of email announcements, and generation of presentation material and other supporting documentation. By hand, these tasks took up hours, if not days, of work for each maneuver. Automated, these tasks may be completed in under an hour. During the cruise trajectory the spacing of maneuvers was such that development of a maneuver design could span about a month, involving several other processes in addition to that described above. Often, the last five days or so of this process covered the generation of a final design using an updated orbit-determination estimate. To support the tour trajectory, the orbit determination data cut-off of five days before the maneuver needed to be reduced to approximately one day, and the whole maneuver development process needed to be reduced to less than a week.

  3. Automated Intelligent Assistant for mass spectrometry operation

    International Nuclear Information System (INIS)

    Filby, E.E.; Rankin, R.A.; Yoshida, D.E.

    1991-01-01

    The Automated Intelligent Assistant is designed to ensure that our mass spectrometers produce timely, high-quality measurement data. The design combines instrument interfacing and expert system technology to automate an adaptable set-point damage prevention strategy. When shutdowns occur, the Assistant can help guide troubleshooting efforts. Stored real-time data will help our development program upgrade and improve the system, and also make it possible to re-run previously observed instrument problems as "live" training exercises for the instrument operators. Initial work has focused on implementing the Assistant for the instrument's ultra-high vacuum components. 14 refs., 5 figs.

  4. Automated packing systems: review of industrial implementations

    Science.gov (United States)

    Whelan, Paul F.; Batchelor, Bruce G.

    1993-08-01

    A rich theoretical background to the problems that occur in the automation of material handling can be found in the operations research, production engineering, systems engineering and automation (more specifically, machine vision) literature. This work has contributed towards the design of intelligent handling systems. This paper reviews the application of these automated material handling and packing techniques to industrial problems. The discussion also highlights the systems integration issues involved in these applications. An outline of one such industrial application, the automated placement of shape templates onto leather hides, is also discussed. The purpose of this system is to arrange shape templates on a leather hide in an efficient manner, so as to minimize the leather waste, before they are automatically cut from the hide. These pieces are used in the furniture and car manufacturing industries for the upholstery of high-quality leather chairs and car seats. Currently this type of operation is semi-automated. The paper outlines the problems involved in the full automation of such a procedure.

  5. Learning Hydraulic and Pneumatic Systems Using Automation Studio (Pembelajaran Sistem Hidrolik dan Pneumatik dengan Menggunakan Automation Studio)

    Directory of Open Access Journals (Sweden)

    Adi Dewanto

    2015-02-01

    Full Text Available Students find it difficult to master hydraulic and pneumatic systems because it is hard for them to visualise how the components move, and this affects their learning of hydraulic and pneumatic system applications. To address this problem, the lecturer of the Mechatronics course used the Automation Studio application. This software is helpful for designing various kinds of automation, such as combinations of hydraulic, pneumatic, electrical and PLC systems. The lecturing process and the design simulations were conducted using Automation Studio. In general, the students were greatly helped by this program in mastering the theory and practice of hydraulics and pneumatics. On the other hand, some problems were found in applying Automation Studio in the classroom; these concerned the limited menu options as well as technical aspects related to the number of computers available. The implications of the authors' experience in using Automation Studio are that there is an opportunity for programmers to create learning media/software for specific competences that is relevant, accessible and applicable, and that software preparation should be carried out by the lecturers and the students before the learning process. Keywords: automation studio program, learning process, Pneumatic and hydraulic learning

  6. Human-centred automation programme: review of experiment related studies

    International Nuclear Information System (INIS)

    Grimstad, Tone; Andresen, Gisle; Skjerve, Ann Britt Miberg

    2000-04-01

    Twenty-three empirical studies concerning automation and performance have been reviewed. The purposes of the review are to support experimental studies in the Human-Centred Automation (HCA) programme and to develop a general theory on HCA. Each study was reviewed with regard to twelve study characteristics: domain, type of study, purpose, definition of automation, variables, theoretical basis, models of operator performance, methods applied, experimental design, outcome, stated scope of results, strengths and limitations. Seven of the studies involved domain experts, the rest used students as participants. The majority of the articles originated from the aviation domain: only the study conducted in HAMMLAB considered process control in power plants. In the experimental studies, the independent variable was level of automation, or reliability of automation, while the most common dependent variables were workload, situation awareness, complacency, trust, and criteria of performance, e.g., number of correct responses or response time. Although the studies highlight important aspects of human-automation interaction, it is still unclear how system performance is affected. Nevertheless, the fact that many factors seem to be involved is taken as support for the system-oriented approach of the HCA programme. In conclusion, the review provides valuable input both to the design of experiments and to the development of a general theory. (Author). refs

  7. Automated design of analog and high-frequency circuits a computational intelligence approach

    CERN Document Server

    Liu, Bo; Fernández, Francisco V

    2014-01-01

    Computational intelligence techniques are becoming more and more important for automated problem solving nowadays. Due to the growing complexity of industrial applications and the increasingly tight time-to-market requirements, the time available for thorough problem analysis and development of tailored solution methods is decreasing. There is no doubt that this trend will continue in the foreseeable future. Hence, it is not surprising that robust and general automated problem solving methods with satisfactory performance are needed.

  8. FY1995 study of low power LSI design automation software with parallel processing; 1995 nendo heiretsu shori wo katsuyoshita shodenryoku LSI muke sekkei jidoka software no kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The needs for low power LSIs have rapidly increased recently. For low power LSI development, not only new circuit technologies but also new design automation tools supporting those technologies are indispensable. The purpose of this project is to develop new design automation software able to design digital LSIs with much lower power than conventional CMOS LSIs. A new design automation software for very low power LSIs has been developed targeting the pass-transistor logic SPL, a dedicated low power circuit technology. The software includes a logic synthesis function for pass-transistor-based macrocells and a macrocell placement function. Several new algorithms have been developed for the software, e.g. BDD construction. Some of them are designed and implemented for parallel processing in order to reduce the processing time. The logic synthesis function was tested on a set of benchmarks and finally applied to a low power CPU design. The designed 8-bit CPU was fully compatible with the Zilog Z-80. Its power dissipation was compared with that of a commercial CMOS Z-80: the new CPU reduced power by up to 82% relative to the CMOS version. Parallel-processing speed-up was also measured for the macrocell placement function, where a 34-fold speed-up was realized. (NEDO)
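
    As an illustration of the BDD-construction step mentioned above, a naive reduced-ordered-BDD builder via Shannon expansion with a unique table is sketched below (the project's actual algorithms and their parallel implementation are not described in enough detail to reproduce; the example function and variable order are assumptions):

        # Terminals plus a unique table give node sharing; mk() also removes
        # redundant tests, so the result is a reduced ordered BDD (ROBDD).
        FALSE, TRUE = 0, 1
        unique = {}            # (var, low, high) -> node id
        nodes = {}             # node id -> (var, low, high)
        next_id = 2

        def mk(var, low, high):
            global next_id
            if low == high:                 # redundant test: skip the node
                return low
            key = (var, low, high)
            if key not in unique:
                unique[key] = next_id
                nodes[next_id] = key
                next_id += 1
            return unique[key]

        def build(f, order, env=None, level=0):
            """Shannon-expand f over the variable order (exponential, for clarity only)."""
            env = env or {}
            if level == len(order):
                return TRUE if f(env) else FALSE
            var = order[level]
            low = build(f, order, {**env, var: 0}, level + 1)
            high = build(f, order, {**env, var: 1}, level + 1)
            return mk(var, low, high)

        # Example: f(a, b, c) = (a AND b) OR c
        root = build(lambda e: (e["a"] and e["b"]) or e["c"], ["a", "b", "c"])
        print(root, nodes[root])            # root node and its (var, low, high) triple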

  9. Preliminary Framework for Human-Automation Collaboration

    International Nuclear Information System (INIS)

    Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Spielman, Zachary Alexander

    2015-01-01

    The Department of Energy's Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet as well as utilize automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as the human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator's use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two-phased development of a preliminary HAC framework. The framework developed in the first phase was used as

  10. Preliminary Framework for Human-Automation Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna Helene [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The Department of Energy’s Advanced Reactor Technologies Program sponsors research, development and deployment activities through its Next Generation Nuclear Plant, Advanced Reactor Concepts, and Advanced Small Modular Reactor (aSMR) Programs to promote safety, technical, economical, and environmental advancements of innovative Generation IV nuclear energy technologies. The Human Automation Collaboration (HAC) Research Project is located under the aSMR Program, which identifies developing advanced instrumentation and controls and human-machine interfaces as one of four key research areas. It is expected that the new nuclear power plant designs will employ technology significantly more advanced than the analog systems in the existing reactor fleet as well as utilize automation to a greater extent. Moving towards more advanced technology and more automation does not necessarily imply more efficient and safer operation of the plant. Instead, a number of concerns about how these technologies will affect human performance and the overall safety of the plant need to be addressed. More specifically, it is important to investigate how the operator and the automation work as a team to ensure effective and safe plant operation, also known as the human-automation collaboration (HAC). The focus of the HAC research is to understand how various characteristics of automation (such as its reliability, processes, and modes) affect an operator’s use and awareness of plant conditions. In other words, the research team investigates how to best design the collaboration between the operators and the automated systems in a manner that has the greatest positive impact on overall plant performance and reliability. This report addresses the Department of Energy milestone M4AT-15IN2302054, Complete Preliminary Framework for Human-Automation Collaboration, by discussing the two-phased development of a preliminary HAC framework. The framework developed in the first phase was used as the

  11. Analysis of an Automated Vehicle Routing Problem in Logistics considering Path Interruption

    Directory of Open Access Journals (Sweden)

    Yong Zhang

    2017-01-01

    Full Text Available The application of automated vehicles in logistics can efficiently reduce the cost of logistics and reduce the potential risks in the last mile. Considering the path restrictions in the initial stage of applying automated vehicles in logistics, the conventional model for the vehicle routing problem (VRP) is modified, and an automated vehicle routing problem with time windows (AVRPTW) model considering path interruption is established. Additionally, an improved particle swarm optimisation (PSO) algorithm is designed to solve this problem. Finally, a case study is undertaken to test the validity of the model and the algorithm: four automated vehicles are designated to execute all delivery tasks required by 25 stores, with the capacities of all of the automated vehicles almost fully utilised. Developing such research on real problems arising in this initial period is of considerable significance for the promotion of automated vehicles in last-mile delivery.
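
    The record does not detail the improved PSO, but the generic particle swarm update it builds on can be sketched as follows (continuous positions; for an AVRPTW the positions would additionally have to be decoded into vehicle routes, e.g. by random-key sorting, which is an assumption on our part):

        import random

        def pso(objective, dim, n_particles=30, iters=200,
                w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
            lo, hi = bounds
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            V = [[0.0] * dim for _ in range(n_particles)]
            pbest, pbest_val = [x[:] for x in X], [objective(x) for x in X]
            g = min(range(n_particles), key=lambda i: pbest_val[i])
            gbest, gbest_val = pbest[g][:], pbest_val[g]
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        # velocity update: inertia + cognitive pull + social pull
                        V[i][d] = (w * V[i][d]
                                   + c1 * r1 * (pbest[i][d] - X[i][d])
                                   + c2 * r2 * (gbest[d] - X[i][d]))
                        X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
                    val = objective(X[i])
                    if val < pbest_val[i]:
                        pbest[i], pbest_val[i] = X[i][:], val
                        if val < gbest_val:
                            gbest, gbest_val = X[i][:], val
            return gbest, gbest_val

        # Toy usage: minimise a sphere function standing in for total route cost.
        best, cost = pso(lambda x: sum(v * v for v in x), dim=5)
        print(round(cost, 6))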

  12. Advanced Air Traffic Management Research (Human Factors and Automation): NASA Research Initiatives in Human-Centered Automation Design in Airspace Management

    Science.gov (United States)

    Corker, Kevin M.; Condon, Gregory W. (Technical Monitor)

    1996-01-01

    NASA has initiated a significant thrust of research and development focused on providing the flight crew and air traffic managers automation aids to increase capacity in en route and terminal area operations through the use of flexible, more fuel-efficient routing, while improving the level of safety in commercial carrier operations. In that system development, definition of cognitive requirements for integrated multi-operator dynamic aiding systems is fundamental. The core processes of control and the distribution of decision making in that control are undergoing extensive analysis. From our perspective, the human operators and the procedures by which they interact are the fundamental determinants of the safe, efficient, and flexible operation of the system. In that perspective, we have begun to explore what our experience has taught will be the most challenging aspects of designing and integrating human-centered automation in the advanced system. We have performed a full mission simulation looking at the role shift to self-separation on board the aircraft, with the rules of the air guiding behavior and the provision of a cockpit display of traffic information and an on-board traffic alert system that seamlessly integrates into TCAS operations. We have performed an initial investigation of the operational impact of "Dynamic Density" metrics on controllers relinquishing and reestablishing full separation authority. (We follow the assumption that responsibility at all times resides with the controller.) This presentation will describe those efforts as well as the process by which we will guide the development of error-tolerant systems that are sensitive to shifts in operator workload levels and dynamic shifts in the operating point of air traffic management.

  13. An efficient approach to bioconversion kinetic model generation based on automated microscale experimentation integrated with model driven experimental design

    DEFF Research Database (Denmark)

    Chen, B. H.; Micheletti, M.; Baganz, F.

    2009-01-01

    Reliable models of enzyme kinetics are required for the effective design of bioconversion processes. Kinetic expressions of the enzyme-catalysed reaction rate, however, are frequently complex, and establishing accurate values of kinetic parameters normally requires a large number of experiments. These can be both time consuming and expensive when working with the types of non-natural chiral intermediates important in pharmaceutical syntheses. This paper presents an automated microscale approach to the rapid and cost-effective generation of reliable kinetic models useful for bioconversion process ... -erythrulose. Experiments were performed using automated microwell studies at the 150 or 800 mu L scale. The derived kinetic parameters were then verified in a second round of experiments, where model predictions showed excellent agreement with experimental data obtained under conditions not included in the original ...
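
    A minimal sketch of the kind of kinetic-parameter estimation involved, using Michaelis-Menten as a stand-in rate model (the substrate concentrations and rates below are synthetic, not the paper's data):

        import numpy as np
        from scipy.optimize import curve_fit

        def michaelis_menten(s, vmax, km):
            # initial reaction rate as a function of substrate concentration s
            return vmax * s / (km + s)

        # hypothetical substrate concentrations (mM) and measured initial rates
        s = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)
        v = np.array([0.9, 1.6, 2.6, 3.9, 4.6, 5.1, 5.4])

        (vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[5.0, 2.0])
        print(f"Vmax = {vmax:.2f}, Km = {km:.2f}")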

  14. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    Science.gov (United States)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.
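
    A toy sketch of the underlying idea: fitting a power-law 'design principle' of the form f1^a * f2 = constant to Pareto-like data by log-linear regression (the paper's clustering-based procedure is more general, handling variables, objectives, constraints and user-supplied basis functions; the data below are synthetic):

        import numpy as np

        rng = np.random.default_rng(1)
        f1 = rng.uniform(1.0, 10.0, 200)                 # objective 1 on the Pareto front
        f2 = 50.0 / f1 + rng.normal(0.0, 0.05, 200)      # objective 2 trades off as ~50/f1

        # Fit log(f2) = c + s*log(f1); s close to -1 reveals the rule f1 * f2 = constant.
        A = np.vstack([np.log(f1), np.ones_like(f1)]).T
        (s, c), *_ = np.linalg.lstsq(A, np.log(f2), rcond=None)
        a, const = -s, np.exp(c)
        print(f"discovered principle: f1^{a:.2f} * f2 = {const:.1f} (approximately constant)")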

  15. Designing a fully automated multi-bioreactor plant for fast DoE optimization of pharmaceutical protein production.

    Science.gov (United States)

    Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner

    2013-06-01

    The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win and the DoE software MODDE® (Umetrics AB, Sweden) and therefore enables the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H has been transformed for the expression of potential malaria vaccines. Compared with initial cultivation results, the DoE optimization procedure allowed a doubling of intact protein secretion productivity. In a next step, robustness to process-parameter variability around the determined optimum has been demonstrated. A significantly improved pharmaceutical production process was thereby established within seven 24-hour cultivation cycles. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
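
    A minimal sketch of the kind of experimental plan such a DoE module generates, here a two-level full-factorial design with centre points for three cultivation factors (factor names and ranges are illustrative assumptions, not the BioPAT module's interface):

        from itertools import product

        factors = {
            "temperature_C": (25.0, 30.0),
            "pH": (5.0, 6.0),
            "methanol_feed_g_per_L_h": (1.0, 3.0),
        }

        def coded_to_real(coded):
            # map coded levels (-1, +1) onto the real factor ranges
            return {name: lo + (c + 1) / 2 * (hi - lo)
                    for c, (name, (lo, hi)) in zip(coded, factors.items())}

        # 2^3 factorial corner runs, one per bioreactor cultivation
        runs = [coded_to_real(c) for c in product((-1, +1), repeat=len(factors))]
        # three replicate centre points to estimate experimental variability
        runs += [{name: (lo + hi) / 2 for name, (lo, hi) in factors.items()}] * 3

        for i, run in enumerate(runs, 1):
            print(i, run)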

  16. Complacency and Automation Bias in the Use of Imperfect Automation.

    Science.gov (United States)

    Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L

    2015-08-01

    We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.

  17. Nuclear power generation and automation technology

    International Nuclear Information System (INIS)

    Korei, Yoshiro

    1985-01-01

    The proportion of nuclear power in total electric power generation has been increasing year after year, and ensuring its stable supply is in demand. For the further development of nuclear power generation, improving economic efficiency, which is the largest merit of nuclear power, and gaining public acceptance as a safe and stable electric power source are the important subjects. In order to address these subjects, various automation techniques have been applied in nuclear power generation for the purposes of increasing reliability, saving labor, and reducing radiation exposure. To meet the strong needs for automation, computer-aided automation technologies have been applied to the design, manufacture, construction, operation and maintenance of nuclear power plants. Computer-aided design and design examples of a reactor building, piping and a fuel assembly are shown, together with an automatic all-position TIG pipe welder, a new central monitoring and control system, an automatic exchanger for control rod drive mechanisms, an automatic in-service inspection system for nozzles and piping, and a robot for steam generator maintenance. The trend of technical development is explained, along with an intelligent mobile robot, a system maintenance robot and a four-legged walking robot. (Kako, I.)

  18. Self-optimizing approach for automated laser resonator alignment

    Science.gov (United States)

    Brecher, C.; Schmitt, R.; Loosen, P.; Guerrero, V.; Pyschny, N.; Pavim, A.; Gatej, A.

    2012-02-01

    Nowadays, the assembly of laser systems is dominated by manual operations, involving elaborate alignment by means of adjustable mountings. From a competition perspective, the most challenging problem in laser source manufacturing is price pressure, a result of cost competition exerted mainly from Asia. From an economical point of view, an automated assembly of laser systems defines a better approach to produce more reliable units at lower cost. However, the step from today's manual solutions towards an automated assembly requires parallel developments regarding product design, automation equipment and assembly processes. This paper introduces briefly the idea of self-optimizing technical systems as a new approach towards highly flexible automation. Technically, the work focuses on the precision assembly of laser resonators, which is one of the final and most crucial assembly steps in terms of beam quality and laser power. The paper presents a new design approach for miniaturized laser systems and new automation concepts for a robot-based precision assembly, as well as passive and active alignment methods, which are based on a self-optimizing approach. Very promising results have already been achieved, considerably reducing the duration and complexity of the laser resonator assembly. These results as well as future development perspectives are discussed.

  19. Automation trust and attention allocation in multitasking workspace.

    Science.gov (United States)

    Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke

    2018-07-01

    Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.

  20. Automated sampling and data processing derived from biomimetic membranes

    International Nuclear Information System (INIS)

    Perry, M; Vissing, T; Hansen, J S; Nielsen, C H; Boesen, T P; Emneus, J

    2009-01-01

    Recent advances in biomimetic membrane systems have resulted in an increase in membrane lifetimes from hours to days and months. Long-lived membrane systems demand the development of both new automated monitoring equipment capable of measuring electrophysiological membrane characteristics and new data processing software to analyze and organize the large amounts of data generated. In this work, we developed an automated instrumental voltage clamp solution based on a custom-designed software controller application (the WaveManager), which enables automated on-line voltage clamp data acquisition applicable to long time-series experiments. We designed another software program for off-line data processing. The automation of the on-line voltage clamp data acquisition and off-line processing was furthermore integrated with a searchable database (DiscoverySheet™) for efficient data management. The combined solution provides a cost efficient and fast way to acquire, process and administrate large amounts of voltage clamp data that may be too laborious and time consuming to handle manually. (communication)
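
    A minimal sketch of the automated acquisition idea (the real WaveManager drives a voltage-clamp amplifier; read_current_pA below is a hypothetical stand-in for the instrument driver, and the sampled values are simulated):

        import csv, random, time

        def read_current_pA():                  # stand-in for the instrument driver
            return random.gauss(0.0, 2.0)

        def acquire(path, n_samples=100, interval_s=0.01):
            # periodically sample the membrane current and log it for off-line processing
            with open(path, "w", newline="") as f:
                writer = csv.writer(f)
                writer.writerow(["t_s", "current_pA"])
                t0 = time.time()
                for _ in range(n_samples):
                    writer.writerow([round(time.time() - t0, 4), read_current_pA()])
                    time.sleep(interval_s)

        acquire("membrane_trace.csv")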