WorldWideScience

Sample records for vlsi processing arrays

  1. Design of two easily-testable VLSI array multipliers

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, J.; Shen, J.P.

    1983-01-01

    Array multipliers are well-suited to VLSI implementation because of the regularity in their iterative structure. However, most VLSI circuits are very difficult to test. This paper shows that, with appropriate cell design, array multipliers can be designed to be very easily testable. An array multiplier is called c-testable if all its adder cells can be exhaustively tested while requiring only a constant number of test patterns. The testability of two well-known array multiplier structures is studied. The conventional design of the carry-save array multiplier is shown to be not c-testable. However, a modified design, using a modified adder cell, is generated and shown to be c-testable, requiring only 16 test patterns. Similar results are obtained for the Baugh-Wooley two's complement array multiplier. A modified design of the Baugh-Wooley array multiplier is shown to be c-testable and requires 55 test patterns. The implementation of a practical c-testable 16*16 array multiplier is also presented. 10 references.
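
    The c-testability target can be made concrete with a small behavioral model. The sketch below (our own illustrative Python construction, not the authors' modified cell design) builds an unsigned carry-save array multiplier out of full-adder cells and records which (partial-product, sum-in, carry-in) combinations each cell actually receives; under random multiplier-level vectors many cells never see all eight combinations, which is exactly what a c-testable design guarantees with a constant-size pattern set.

        from collections import defaultdict
        import random

        def full_adder(a, b, cin):
            s = a ^ b ^ cin
            cout = (a & b) | (a & cin) | (b & cin)
            return s, cout

        def csa_multiply(a, b, n, seen=None):
            """Multiply two n-bit unsigned integers with a carry-save adder array."""
            a_bits = [(a >> i) & 1 for i in range(n)]
            b_bits = [(b >> i) & 1 for i in range(n)]
            sums, carries = [0] * (2 * n), [0] * (2 * n + 1)
            for i in range(n):                                 # one row of cells per multiplier bit
                new_sums, new_carries = [0] * (2 * n), [0] * (2 * n + 1)
                for col in range(2 * n):
                    j = col - i
                    pp = a_bits[j] & b_bits[i] if 0 <= j < n else 0
                    s, c = full_adder(pp, sums[col], carries[col])
                    if seen is not None:
                        seen[(i, col)].add((pp, sums[col], carries[col]))
                    new_sums[col] = s
                    new_carries[col + 1] = c                   # carry moves one column up
                sums, carries = new_sums, new_carries
            # final carry-propagate addition of the two result vectors
            return sum(bit << k for k, bit in enumerate(sums)) + \
                   sum(bit << k for k, bit in enumerate(carries))

        seen = defaultdict(set)
        for _ in range(200):
            a, b = random.randrange(16), random.randrange(16)
            assert csa_multiply(a, b, 4, seen) == a * b
        print(min(len(v) for v in seen.values()))              # several cells see far fewer than 8 input combinations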

  2. Plasma processing for VLSI

    CERN Document Server

    Einspruch, Norman G

    1984-01-01

    VLSI Electronics: Microstructure Science, Volume 8: Plasma Processing for VLSI (Very Large Scale Integration) discusses the utilization of plasmas for general semiconductor processing. It also includes expositions on advanced deposition of materials for metallization, lithographic methods that use plasmas as exposure sources and for multiple resist patterning, and device structures made possible by anisotropic etching. This volume is divided into four sections. It begins with the history of plasma processing, a discussion of some of the early developments and trends for VLSI. The second section

  3. Wavelength-encoded OCDMA system using opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Alameh, Kamal

    2007-07-01

    We propose and experimentally demonstrate a 2.5 Gbits/s per user wavelength-encoded optical code-division multiple-access encoder-decoder structure based on opto-VLSI processing. Each encoder and decoder is constructed using a single 1D opto-very-large-scale-integrated (VLSI) processor in conjunction with a fiber Bragg grating (FBG) array of different Bragg wavelengths. The FBG array spectrally and temporally slices the broadband input pulse into several components and the opto-VLSI processor generates codewords using digital phase holograms. System performance is measured in terms of the autocorrelation and cross-correlation functions as well as the eye diagram.
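
    The autocorrelation and cross-correlation figures of merit can be illustrated with a toy computation. The sketch below is an assumption-laden Python illustration, not the experimental setup: two short bipolar codewords stand in for the wavelength-encoded sequences, and the autocorrelation peak is compared with the cross-correlation level.

        import numpy as np

        # Toy bipolar codewords standing in for the wavelength-encoded sequences
        code_a = np.array([1, -1, 1, 1, -1, -1, 1, -1])
        code_b = np.array([1, 1, -1, 1, -1, 1, -1, -1])

        auto = np.correlate(code_a, code_a, mode='full')      # sharp peak at zero lag
        cross = np.correlate(code_a, code_b, mode='full')     # should stay well below the peak
        print(auto.max(), abs(cross).max())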

  5. VLSI signal processing technology

    CERN Document Server

    Swartzlander, Earl

    1994-01-01

    This book is the first in a set of forthcoming books focussed on state-of-the-art development in the VLSI Signal Processing area. It is a response to the tremendous research activities taking place in that field. These activities have been driven by two factors: the dramatic increase in demand for high speed signal processing, especially in consumer electronics, and the evolving microelectronic technologies. The available technology has always been one of the main factors in determining algorithms, architectures, and design strategies to be followed. With every new technology, signal processing systems go through many changes in concepts, design methods, and implementation. The goal of this book is to introduce the reader to the main features of VLSI Signal Processing and the ongoing developments in this area. The focus of this book is on: • Current developments in Digital Signal Processing (DSP) processors and architectures - several examples and case studies of existing DSP chips are discussed in...

  6. VLSI design

    CERN Document Server

    Basu, D K

    2014-01-01

    Very Large Scale Integrated Circuits (VLSI) design has moved from costly curiosity to an everyday necessity, especially with the proliferated applications of embedded computing devices in communications, entertainment and household gadgets. As a result, more and more knowledge on various aspects of VLSI design technologies is becoming a necessity for the engineering/technology students of various disciplines. With this goal in mind the course material of this book has been designed to cover the various fundamental aspects of VLSI design: categorization and comparison between various technologies used for VLSI design; basic fabrication processes involved in VLSI design; design of MOS, CMOS and BiCMOS circuits used in VLSI; structured design of VLSI; introduction to VHDL for VLSI design; automated design for placement and routing of VLSI systems; and VLSI testing and testability. The various topics of the book have been discussed lucidly with analysis, when required, examples, figures and adequate analytical and the...

  7. VLSI design

    CERN Document Server

    Chandrasetty, Vikram Arkalgud

    2011-01-01

    This book provides insight into the practical design of VLSI circuits. It is aimed at novice VLSI designers and other enthusiasts who would like to understand VLSI design flows. Coverage includes key concepts in CMOS digital design, design of DSP and communication blocks on FPGAs, ASIC front end and physical design, and analog and mixed signal design. The approach is designed to focus on practical implementation of key elements of the VLSI design process, in order to make the topic accessible to novices. The design concepts are demonstrated using software from Mathworks, Xilinx, Mentor Graphic

  8. Opto-VLSI-based reconfigurable free-space optical interconnects architecture

    DEFF Research Database (Denmark)

    Aljada, Muhsen; Alameh, Kamal; Chung, Il-Sug

    2007-01-01

    At the core of the architecture is the Opto-VLSI processor, which can be driven by digital phase steering and multicasting holograms that reconfigure the optical interconnects between the input and output ports. The optical interconnects architecture is experimentally demonstrated at 2.5 Gbps using a high-speed 1×3 VCSEL array and 1×3 photoreceiver array in conjunction with two 1×4096 pixel Opto-VLSI processors. The minimisation of the crosstalk between the output ports is achieved by appropriately aligning the VCSEL and PD elements with respect to the Opto-VLSI processors and driving the latter with optimal steering phase holograms.

  9. VLSI electronics microstructure science

    CERN Document Server

    1982-01-01

    VLSI Electronics: Microstructure Science, Volume 4 reviews trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development. This book discusses the silicon-on-insulator for VLSI and VHSIC, X-ray lithography, and transient response of electron transport in GaAs using the Monte Carlo method. The technology and manufacturing of high-density magnetic-bubble memories, metallic superlattices, challenge of education for VLSI, and impact of VLSI on medical signal processing are also elaborated. This text likewise covers the impact of VLSI t

  10. A second generation 50 Mbps VLSI level zero processing system prototype

    Science.gov (United States)

    Harris, Jonathan C.; Shi, Jeff; Speciale, Nick; Bennett, Toby

    1994-01-01

    Level Zero Processing (LZP) generally refers to telemetry data processing functions performed at ground facilities to remove all communication artifacts from instrument data. These functions typically include frame synchronization, error detection and correction, packet reassembly and sorting, playback reversal, merging, time-ordering, overlap deletion, and production of annotated data sets. The Data Systems Technologies Division (DSTD) at Goddard Space Flight Center (GSFC) has been developing high-performance Very Large Scale Integration Level Zero Processing Systems (VLSI LZPS) since 1989. The first VLSI LZPS prototype demonstrated 20 Megabits per second (Mbps) capability in 1992. With a new generation of high-density Application-Specific Integrated Circuits (ASIC) and a Mass Storage System (MSS) based on the High-performance Parallel Peripheral Interface (HiPPI), a second prototype has been built that achieves full 50 Mbps performance. This paper describes the second generation LZPS prototype based upon VLSI technologies.
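
    Frame synchronization, the first of the listed functions, is easy to sketch in software. The fragment below is a hedged Python illustration, not the VLSI LZPS implementation: it hunts for the 32-bit CCSDS attached sync marker 0x1ACFFC1D in a serial bit stream and returns the offsets where fixed-length frame payloads begin; the frame length used here is only illustrative.

        ASM = 0x1ACFFC1D               # CCSDS attached sync marker
        FRAME_BITS = 1024 * 8          # illustrative transfer-frame length

        def find_frames(bits):
            """bits: sequence of 0/1. Returns the offsets where frame data starts after each marker."""
            word, offsets = 0, []
            for i, b in enumerate(bits):
                word = ((word << 1) | b) & 0xFFFFFFFF      # 32-bit sliding correlator
                if i >= 31 and word == ASM:
                    offsets.append(i + 1)
            return [o for o in offsets if o + FRAME_BITS <= len(bits)]

        asm_bits = [(ASM >> i) & 1 for i in range(31, -1, -1)]
        stream = [1, 0, 1] + asm_bits + [0] * FRAME_BITS
        print(find_frames(stream))     # [35]: payload begins right after the marker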

  11. A novel configurable VLSI architecture design of window-based image processing method

    Science.gov (United States)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures can implement only a specific class of algorithms, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new configurable VLSI architecture for window-based image processing operations that takes the image boundary into account. An efficient technique is explored to manage the image borders by overlapping and flushing phases at the end of each row and the end of each frame, which introduces no new delay and reduces overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar function and reduction function operations can be performed in the pipeline, which supports a variety of window-based image processing applications. Compared with other reported structures, the performance of the new structure is similar to some and superior to others; in particular, compared with the systolic array processor CWP, this structure achieves a speed increase of approximately 12.9% at the same frequency. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively, and the processing time is independent of the particular window-based algorithm mapped to the structure.
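
    The scalar-function/reduction-function pairing can be illustrated in a few lines of software. The sketch below is a plain Python/NumPy model (our own illustration, not the proposed hardware): borders are handled by replicating edge pixels as a stand-in for the overlap-and-flush scheme, and different window operators are obtained simply by swapping the two function arguments.

        import numpy as np

        def window_op(img, scalar_fn, reduce_fn, k=3):
            """Apply scalar_fn to each k x k window, then reduce_fn, for every pixel."""
            pad = k // 2
            padded = np.pad(img, pad, mode='edge')     # replicate borders instead of special-casing them
            out = np.empty_like(img, dtype=float)
            H, W = img.shape
            for y in range(H):
                for x in range(W):
                    win = padded[y:y + k, x:x + k]
                    out[y, x] = reduce_fn(scalar_fn(win, img[y, x]))
            return out

        img = np.arange(25, dtype=float).reshape(5, 5)
        blur = window_op(img, lambda w, c: w, np.mean)     # 3x3 box filter
        erode = window_op(img, lambda w, c: w, np.min)     # grey-scale erosion
        print(blur[0, 0], erode[0, 0])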

  12. VLSI design

    CERN Document Server

    Einspruch, Norman G

    1986-01-01

    VLSI Electronics Microstructure Science, Volume 14: VLSI Design presents a comprehensive exposition and assessment of the developments and trends in VLSI (Very Large Scale Integration) electronics. This volume covers topics that range from microscopic aspects of materials behavior and device performance to the comprehension of VLSI in systems applications. Each article is prepared by a recognized authority. The subjects discussed in this book include VLSI processor design methodology; the RISC (Reduced Instruction Set Computer); the VLSI testing program; silicon compilers for VLSI; and special

  13. VLSI 'smart' I/O module development

    Science.gov (United States)

    Kirk, Dan

    The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.

  14. VLSI design of an RSA encryption/decryption chip using systolic array based architecture

    Science.gov (United States)

    Sun, Chi-Chia; Lin, Bor-Shing; Jan, Gene Eu; Lin, Jheng-Yi

    2016-09-01

    This article presents the VLSI design of a configurable RSA public key cryptosystem supporting 512-bit, 1024-bit and 2048-bit keys, based on the Montgomery algorithm, achieving clock cycle counts comparable to current relevant works but with a smaller die size. We use the binary method for the modular exponentiation and adopt the Montgomery algorithm for the modular multiplication to simplify computational complexity, which, together with the systolic array concept for electronic circuit design, effectively lowers the die size. The main architecture of the chip consists of four functional blocks, namely input/output modules, a registers module, an arithmetic module and a control module. We applied the concept of the systolic array to design the RSA encryption/decryption chip using the VHDL hardware language and verified it using the TSMC/CIC 0.35 μm 1P4M technology. The die area of the 2048-bit RSA chip without the DFT is 3.9 × 3.9 mm2 (4.58 × 4.58 mm2 with DFT). Its average baud rate can reach 10.84 kbps under a 100 MHz clock.
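
    The two building blocks named in the abstract, binary (square-and-multiply) exponentiation and radix-2 Montgomery multiplication, can be sketched directly. The Python fragment below is a bit-serial software model under our own assumptions, not the systolic hardware; the toy RSA parameters (n = 3233, e = 17, d = 413) are the usual textbook example.

        def mont_mul(a, b, n, k):
            """Radix-2 Montgomery product: returns a*b*2^(-k) mod n for odd n."""
            s = 0
            for i in range(k):
                s += ((a >> i) & 1) * b
                if s & 1:
                    s += n                 # make s even so the halving is exact
                s >>= 1
            return s - n if s >= n else s

        def mont_exp(base, exp, n):
            """Left-to-right square-and-multiply in the Montgomery domain."""
            k = n.bit_length()
            r2 = (1 << (2 * k)) % n        # R^2 mod n, used to enter the Montgomery domain
            base_m = mont_mul(base, r2, n, k)
            x = mont_mul(1, r2, n, k)      # Montgomery form of 1
            for bit in bin(exp)[2:]:
                x = mont_mul(x, x, n, k)
                if bit == '1':
                    x = mont_mul(x, base_m, n, k)
            return mont_mul(x, 1, n, k)    # leave the Montgomery domain

        n, e, d, m = 3233, 17, 413, 65     # textbook toy RSA key (n = 61 * 53)
        c = mont_exp(m, e, n)              # encrypt
        print(c, mont_exp(c, d, n))        # 2790 65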

  15. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    Science.gov (United States)

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  16. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
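
    The sorting structure at the heart of this approach can be modelled in a few lines. The Python class below is a behavioral sketch under our own simplifications (it is not the paper's random access, shift register or ripple register scheme): each cell holds one stack entry, an inserted entry ripples toward its ordered position, and the best-metric path is always available at cell 0, which is what the stack algorithm needs at every decoding step.

        class SystolicPriorityQueue:
            """Behavioral model of a systolic priority queue for the stack algorithm."""
            def __init__(self, size):
                self.cells = [None] * size

            def insert(self, metric, path):
                item = (metric, path)
                for i in range(len(self.cells)):
                    if self.cells[i] is None or item[0] > self.cells[i][0]:
                        self.cells[i], item = item, self.cells[i]   # compare-exchange with this cell
                    if item is None:
                        break
                # if item is still not None here, the worst entry fell off the end (queue full)

            def extract_best(self):
                best = self.cells[0]
                self.cells = self.cells[1:] + [None]                # all cells shift left in one step
                return best

        q = SystolicPriorityQueue(8)
        for metric, path in [(-3, [0]), (-1, [1]), (-2, [0, 1])]:
            q.insert(metric, path)
        print(q.extract_best())            # (-1, [1]): the best path is extended next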

  17. VLSI in medicine

    CERN Document Server

    Einspruch, Norman G

    1989-01-01

    VLSI Electronics Microstructure Science, Volume 17: VLSI in Medicine deals with the more important applications of VLSI in medical devices and instruments. This volume is comprised of 11 chapters. It begins with an article about medical electronics. The following three chapters cover diagnostic imaging, focusing on such medical devices as magnetic resonance imaging, neurometric analyzer, and ultrasound. Chapters 5, 6, and 7 present the impact of VLSI in cardiology. The electrocardiograph, implantable cardiac pacemaker, and the use of VLSI in Holter monitoring are detailed in these chapters. The

  18. VLSI implementations for image communications

    CERN Document Server

    Pirsch, P

    1993-01-01

    The past few years have seen a rapid growth in image processing and image communication technologies. New video services and multimedia applications are continuously being designed. Essential for all these applications are image and video compression techniques. The purpose of this book is to report on recent advances in VLSI architectures and their implementation for video signal processing applications with emphasis on video coding for bit rate reduction. Efficient VLSI implementation for video signal processing spans a broad range of disciplines involving algorithms, architectures, circuits

  19. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

    VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development. This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  20. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  1. Parallel VLSI Architecture

    Science.gov (United States)

    Truong, T. K.; Reed, I.; Yeh, C.; Shao, H.

    1985-01-01

    A Fermat number transform is used to convolve two digital data sequences. In very-large-scale integration (VLSI) applications, such as image and radar signal processing, X-ray reconstruction, and spectrum shaping, linear convolution of two digital data sequences of arbitrary lengths is accomplished using the Fermat number transform (FNT).
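
    The arithmetic behind an FNT-based convolver can be checked with a short script. The Python sketch below is only a naive O(N^2) software model of the transform modulo the Fermat prime F4 = 2^16 + 1 (a hardware FNT would exploit powers of 2 as roots of unity to avoid multiplications entirely); it verifies that pointwise products in the transform domain give exact integer linear convolution.

        M = 65537                                  # Fermat prime F4 = 2^16 + 1

        def fnt(x, root):
            """Naive number-theoretic transform modulo M."""
            N = len(x)
            return [sum(x[n] * pow(root, k * n, M) for n in range(N)) % M for k in range(N)]

        def conv_fnt(a, b):
            N = 1
            while N < len(a) + len(b) - 1:         # pad to a power of two so an order-N root exists
                N *= 2
            a = a + [0] * (N - len(a))
            b = b + [0] * (N - len(b))
            g = pow(3, (M - 1) // N, M)            # 3 is a primitive root mod 65537
            C = [(x * y) % M for x, y in zip(fnt(a, g), fnt(b, g))]
            c = fnt(C, pow(g, M - 2, M))           # inverse transform
            n_inv = pow(N, M - 2, M)
            return [(v * n_inv) % M for v in c]

        print(conv_fnt([1, 2, 3], [4, 5]))         # [4, 13, 22, 15], the exact linear convolution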

  2. VLSI structures for track finding

    International Nuclear Information System (INIS)

    Dell'Orso, M.

    1989-01-01

    We discuss the architecture of a device based on the concept of associative memory designed to solve the track finding problem, typical of high energy physics experiments, in a time span of a few microseconds even for very high multiplicity events. This ''machine'' is implemented as a large array of custom VLSI chips. All the chips are equal and each of them stores a number of ''patterns''. All the patterns in all the chips are compared in parallel to the data coming from the detector while the detector is being read out. (orig.)
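
    The parallel matching idea reads naturally as a few lines of code. The Python fragment below is a behavioral sketch with made-up hit addresses, not the custom chip: every stored pattern watches the hit stream while the detector is read out and fires once it has seen a matching hit on every layer.

        patterns = [(3, 5, 7, 9), (2, 4, 6, 8), (3, 4, 6, 9)]   # one coarse hit address per detector layer
        matched = [[False] * 4 for _ in patterns]

        def feed_hit(layer, address):
            """Broadcast one hit to all stored patterns in parallel."""
            for p, pat in enumerate(patterns):
                if pat[layer] == address:
                    matched[p][layer] = True

        for layer, address in [(0, 3), (1, 5), (2, 7), (1, 4), (3, 9)]:
            feed_hit(layer, address)

        roads = [p for p, m in enumerate(matched) if all(m)]
        print(roads)                               # [0]: only the first pattern matched on all layers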

  3. UW VLSI chip tester

    Science.gov (United States)

    McKenzie, Neil

    1989-12-01

    We present a design for a low-cost, functional VLSI chip tester. It is based on the Apple Macintosh II personal computer. It tests chips that have up to 128 pins. All pin drivers of the tester are bidirectional; each pin is programmed independently as an input or an output. The tester can test both static and dynamic chips. Rudimentary speed testing is provided. Chips are tested by executing C programs written by the user. A software library is provided for program development. Tests run under both the Mac Operating System and A/UX. The design is implemented using Xilinx Logic Cell Arrays. Price/performance tradeoffs are discussed.

  4. Implementation of a VLSI Level Zero Processing system utilizing the functional component approach

    Science.gov (United States)

    Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.

    1991-01-01

    A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.

  5. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
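
    The forward projection, back projection and multiplicative update that the chip arrays accelerate are captured by the standard MLEM iteration. The NumPy sketch below is a toy software model with a random system matrix (our own assumption, not the paper's scanner geometry), iterating x <- x / (A^T 1) * A^T(y / Ax).

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((32, 16))                 # toy system matrix: 32 detector bins, 16 image pixels
        x_true = rng.random(16)
        y = rng.poisson(A @ x_true * 50)         # simulated coincidence counts

        x = np.ones(16)                          # uniform initial image
        sens = A.T @ np.ones(32)                 # sensitivity image A^T 1
        for _ in range(20):
            proj = A @ x                                     # forward projection
            ratio = np.where(proj > 0, y / proj, 0.0)        # compare with measured counts
            x = x / sens * (A.T @ ratio)                     # back projection and multiplicative update
        print(np.round(x, 2))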

  6. PLA realizations for VLSI state machines

    Science.gov (United States)

    Gopalakrishnan, S.; Whitaker, S.; Maki, G.; Liu, K.

    1990-01-01

    A major problem associated with state assignment procedures for VLSI controllers is obtaining an assignment that produces minimal or near minimal logic. The key item in Programmable Logic Array (PLA) area minimization is the number of unique product terms required by the design equations. This paper presents a state assignment algorithm for minimizing the number of product terms required to implement a finite state machine using a PLA. Partition algebra with predecessor state information is used to derive a near optimal state assignment. A maximum bound on the number of product terms required can be obtained by inspecting the predecessor state information. The state assignment algorithm presented is much simpler than existing procedures and leads to the same number of product terms or less. An area-efficient PLA structure implemented in a 1.0 micron CMOS process is presented along with a summary of the performance for a controller implemented using this design procedure.

  7. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320 at 78 fps video sequences.
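
    For reference, the half-sample luma interpolation the engine implements reduces to an 8-tap FIR filter. The Python fragment below is a hedged, scalar illustration of that filter applied along one row (the coefficients are the standard HEVC half-sample luma taps; the row data and 8-bit clipping are illustrative, and this is not the paper's pipelined data path).

        HALF_PEL = [-1, 4, -11, 40, 40, -11, 4, -1]    # HEVC 8-tap half-sample luma filter

        def interp_half_pel_row(row):
            out = []
            for x in range(3, len(row) - 4):           # need 3 samples to the left, 4 to the right
                acc = sum(c * row[x - 3 + i] for i, c in enumerate(HALF_PEL))
                out.append(min(max((acc + 32) >> 6, 0), 255))   # normalize by 64, round, clip to 8 bits
            return out

        print(interp_half_pel_row([10, 10, 10, 10, 200, 200, 200, 200, 200, 10, 10, 10]))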

  8. Lithography for VLSI

    CERN Document Server

    Einspruch, Norman G

    1987-01-01

    VLSI Electronics Microstructure Science, Volume 16: Lithography for VLSI treats special topics from each branch of lithography, and also contains general discussion of some lithographic methods. This volume contains 8 chapters that discuss the various aspects of lithography. Chapters 1 and 2 are devoted to optical lithography. Chapter 3 covers electron lithography in general, and Chapter 4 discusses electron resist exposure modeling. Chapter 5 presents the fundamentals of ion-beam lithography. Mask/wafer alignment for x-ray proximity printing and for optical lithography is tackled in Chapter 6.

  9. Lithography requirements in complex VLSI device fabrication

    International Nuclear Information System (INIS)

    Wilson, A.D.

    1985-01-01

    Fabrication of complex very large scale integration (VLSI) circuits requires continual advances in lithography to satisfy: decreasing minimum linewidths, larger chip sizes, tighter linewidth and overlay control, increasing topography to linewidth ratios, higher yield demands, increased throughput, harsher device processing, lower lithography cost, and a larger part number set with quick turn-around time. Where optical, electron beam, x-ray, and ion beam lithography can be applied to judiciously satisfy the complex VLSI circuit fabrication requirements is discussed and those areas that are in need of major further advances are addressed. Emphasis will be placed on advanced electron beam and storage ring x-ray lithography

  10. A VLSI image processor via pseudo-mersenne transforms

    International Nuclear Information System (INIS)

    Sei, W.J.; Jagadeesh, J.M.

    1986-01-01

    The computational burden of image processing in medical fields, where a large amount of information must be processed quickly and accurately, has led to consideration of special-purpose image processor chip design for some time. The very large scale integration (VLSI) revolution has made it cost-effective and feasible to consider the design of special purpose chips for medical imaging fields. This paper describes a VLSI CMOS chip suitable for parallel implementation of image processing algorithms and cyclic convolutions by using the Pseudo-Mersenne Number Transform (PMNT). The main advantages of the PMNT over the Fast Fourier Transform (FFT) are: (1) no multiplications are required; (2) integer arithmetic is used. The design and development of this processor, which operates on 32-point convolutions or 5 x 5 image windows, are described.

  11. ORGANIZATION OF GRAPHIC INFORMATION FOR VIEWING THE MULTILAYER VLSI TOPOLOGY

    Directory of Open Access Journals (Sweden)

    V. I. Romanov

    2016-01-01

    One of the possible ways to reorganize the graphical information describing the set of topology layers of a modern VLSI is considered. The method is aimed at use under the constraint of a bounded video card memory size. An additional effect, providing high performance in forming the multi-image layout of the multi-layer topology of a modern VLSI, is achieved by preloading the required textures by means of an auxiliary background process.

  12. Surface and interface effects in VLSI

    CERN Document Server

    Einspruch, Norman G

    1985-01-01

    VLSI Electronics Microstructure Science, Volume 10: Surface and Interface Effects in VLSI provides the advances made in the science of semiconductor surfaces and interfaces as they relate to electronics. This volume aims to provide a better understanding and control of surface and interface related properties. The book begins with an introductory chapter on the intimate link between interfaces and devices. The book is then divided into two parts. The first part covers the chemical and geometric structures of prototypical VLSI interfaces. Subjects detailed include the technologically most import

  13. New domain for image analysis: VLSI circuits testing, with Romuald, specialized in parallel image processing

    Energy Technology Data Exchange (ETDEWEB)

    Rubat Du Merac, C; Jutier, P; Laurent, J; Courtois, B

    1983-07-01

    This paper describes some aspects of specifying, designing and evaluating a specialized machine, Romuald, for the capture, coding, and processing of video and scanning electron microscope (SEM) pictures. First the authors present the functional organization of the processing unit of Romuald and its hardware, giving details of its behaviour. Then they study the capture and display unit which, thanks to its flexibility, enables SEM image coding. Finally, they describe an application which is now being developed in their laboratory: testing VLSI circuits with new methods: SEM + voltage contrast and image processing. 15 references.

  14. The VLSI handbook

    CERN Document Server

    Chen, Wai-Kai

    2007-01-01

    Written by a stellar international panel of expert contributors, this handbook remains the most up-to-date, reliable, and comprehensive source for real answers to practical problems. In addition to updated information in most chapters, this edition features several heavily revised and completely rewritten chapters, new chapters on such topics as CMOS fabrication and high-speed circuit design, heavily revised sections on testing of digital systems and design languages, and two entirely new sections on low-power electronics and VLSI signal processing. An updated compendium of references and othe

  15. VLSI Architectures for Computing DFT's

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.

    1986-01-01

    Simplifications result from the use of residue Fermat number systems. A system of finite arithmetic over residue Fermat number systems enables calculation of the discrete Fourier transform (DFT) of a series of complex numbers with a reduced number of multiplications. Computer architectures based on this approach are suitable for the design of very-large-scale integrated (VLSI) circuits for computing DFTs. The general approach is not limited to DFTs; it is applicable to decoding of error-correcting codes and other transform calculations. The system is readily implemented in VLSI.

  16. Electro-optic techniques for VLSI interconnect

    Science.gov (United States)

    Neff, J. A.

    1985-03-01

    A major limitation to achieving significant speed increases in very large scale integration (VLSI) lies in the metallic interconnects. They are costly not only from the charge transport standpoint but also from capacitive loading effects. The Defense Advanced Research Projects Agency, in pursuit of the fifth generation supercomputer, is investigating alternatives to the VLSI metallic interconnects, especially the use of optical techniques to transport the information either inter or intrachip. As the on chip performance of VLSI continues to improve via the scale down of the logic elements, the problems associated with transferring data off and onto the chip become more severe. The use of optical carriers to transfer the information within the computer is very appealing from several viewpoints. Besides the potential for gigabit propagation rates, the conversion from electronics to optics conveniently provides a decoupling of the various circuits from one another. Significant gains will also be realized in reducing cross talk between the metallic routings, and the interconnects need no longer be constrained to the plane of a thin film on the VLSI chip. In addition, optics can offer an increased programming flexibility for restructuring the interconnect network.

  17. Signal Processing in Medical Ultrasound B-mode Imaging

    International Nuclear Information System (INIS)

    Song, Tai Kyong

    2000-01-01

    Ultrasonic imaging is the most widely used modality among modern imaging devices for medical diagnosis, and system performance has improved dramatically since the early 1990s due to the rapid advances in DSP performance and VLSI technology that made it possible to employ more sophisticated algorithms. This paper describes 'main stream' digital signal processing functions along with the associated implementation considerations in modern medical ultrasound imaging systems. Topics covered include signal processing methods for resolution improvement, ultrasound imaging system architectures, the roles and necessity of DSP and VLSI technology in the development of medical ultrasound imaging systems, and array signal processing techniques for ultrasound focusing.
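
    The focusing technique the last topic refers to is typically delay-and-sum beamforming, which is compact enough to sketch. The NumPy fragment below is a hedged illustration with made-up element geometry and random RF data (the array pitch, sampling rate and single-focus geometry are assumptions, not the article's system): it delays each channel so echoes from a chosen depth add coherently and then sums across the aperture.

        import numpy as np

        c = 1540.0                    # speed of sound in tissue, m/s
        fs = 40e6                     # RF sampling rate, Hz
        pitch = 0.3e-3                # element spacing, m
        n_elem = 64
        elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

        def das_sample(rf, depth_m):
            """Delay-and-sum one receive-focused sample for a point at (0, depth_m)."""
            t = (depth_m + np.sqrt(depth_m ** 2 + elem_x ** 2)) / c   # transmit + receive path delay
            idx = np.clip(np.round(t * fs).astype(int), 0, rf.shape[1] - 1)
            return rf[np.arange(n_elem), idx].sum()

        rf = np.random.randn(n_elem, 4096)        # stand-in channel data
        print(das_sample(rf, 0.03))               # focused sample at 30 mm depth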

  18. Multi-valued LSI/VLSI logic design

    Science.gov (United States)

    Santrakul, K.

    A procedure for synthesizing any large complex logic system, such as LSI and VLSI integrated circuits, is described. This scheme uses Multi-Valued Multiplexers (MVMUX) as the basic building blocks and the tree as the structure of the circuit realization. Simple built-in test circuits included in the network (the main circuit) provide thorough functional checking of the network at any time. In brief, four major contributions are made: (1) a multi-valued Algorithmic State Machine (ASM) chart for describing LSI/VLSI behavior; (2) a tree-structured multi-valued multiplexer network which can be obtained directly from an ASM chart; (3) a heuristic tree-structured synthesis method for realizing any combinational logic with minimal or nearly-minimal MVMUX; and (4) a hierarchical design of LSI/VLSI with built-in parallel testing capability.

  19. VLSI Design of SVM-Based Seizure Detection System With On-Chip Learning Capability.

    Science.gov (United States)

    Feng, Lichen; Li, Zunchao; Wang, Yuanfa

    2018-02-01

    Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time-frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.
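
    A purely software analogue of the processing chain (three-level Daubechies DWT features feeding a Gaussian-kernel SVM) can be put together with standard libraries. The Python sketch below uses synthetic toy epochs rather than the EEG datasets, plain per-sub-band energies as features, and scikit-learn's SVC in place of the chip's modified SMO with table-driven kernel, so it mirrors only the structure of the design, not its implementation.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def features(epoch):
            """Three-level Daubechies DWT; keep one energy value per sub-band [A3, D3, D2, D1]."""
            coeffs = pywt.wavedec(epoch, 'db4', level=3)
            return np.array([np.sum(c ** 2) / len(c) for c in coeffs])

        rng = np.random.default_rng(0)
        normal = [features(rng.normal(0, 1, 256)) for _ in range(50)]
        seizure = [features(rng.normal(0, 3, 256)) for _ in range(50)]   # toy: higher-energy epochs
        X = np.vstack(normal + seizure)
        y = np.array([0] * 50 + [1] * 50)

        clf = SVC(kernel='rbf', gamma='scale').fit(X, y)                 # Gaussian-kernel SVM
        print(clf.score(X, y))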

  20. Prototype architecture for a VLSI level zero processing system. [Space Station Freedom

    Science.gov (United States)

    Shi, Jianfei; Grebowsky, Gerald J.; Horner, Ward P.; Chesney, James R.

    1989-01-01

    The prototype architecture and implementation of a high-speed level zero processing (LZP) system are discussed. Due to the new processing algorithm and VLSI technology, the prototype LZP system features compact size, low cost, high processing throughput, easy maintainability and increased reliability. Though extensive control functions are implemented in hardware, the programmability of processing tasks makes it possible to adapt the system to different data formats and processing requirements. It is noted that the LZP system can handle up to 8 virtual channels and 24 sources with a combined data volume of 15 Gbytes per orbit. For greater demands, multiple LZP systems can be configured in parallel, each called a processing channel and assigned a subset of virtual channels. The telemetry data stream will be steered into different processing channels in accordance with their virtual channel IDs. This super system can cope with a virtually unlimited number of virtual channels and sources. In the near future, it is expected that new disk farms with data rates exceeding 150 Mbps will be available from commercial vendors due to advances in disk drive technology.

  1. Embedded Processor Based Automatic Temperature Control of VLSI Chips

    Directory of Open Access Journals (Sweden)

    Narasimha Murthy Yayavaram

    2009-01-01

    This paper presents embedded processor based automatic temperature control of VLSI chips, using the temperature sensor LM35 and the ARM processor LPC2378. Due to their very high packing density, VLSI chips heat up very quickly and, if not cooled properly, their performance is severely affected. In the present work, the sensor, which is kept in close proximity to the IC, senses the temperature, and the speed of a fan placed near the IC is controlled based on the PWM signal generated by the ARM processor. A buzzer is also provided with the hardware to indicate either the failure of the fan or overheating of the IC. The entire process is achieved by developing a suitable embedded C program.
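
    The firmware itself is embedded C on the LPC2378; the fragment below is only a language-neutral behavioral sketch (written in Python, with an assumed 10-bit ADC, 3.3 V reference and illustrative temperature thresholds) of the control law: the LM35 output of 10 mV per degree Celsius is converted to a temperature, mapped to a PWM duty cycle for the fan, and a buzzer flag is raised when the chip stays too hot.

        V_REF, ADC_MAX = 3.3, 1023         # assumed 10-bit ADC with a 3.3 V reference

        def lm35_to_celsius(adc_count):
            return adc_count * V_REF / ADC_MAX / 0.010    # LM35: 10 mV per degree Celsius

        def control_step(adc_count, t_min=35.0, t_max=70.0):
            t = lm35_to_celsius(adc_count)
            duty = min(max((t - t_min) / (t_max - t_min), 0.0), 1.0)   # fan PWM duty cycle, 0..1
            buzzer = t > t_max                                          # overheating alarm
            return duty, buzzer

        print(control_step(186))           # about 60 degrees C: roughly 0.7 duty, buzzer off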

  2. VLSI Architecture for Configurable and Low-Complexity Design of Hard-Decision Viterbi Decoding Algorithm

    Directory of Open Access Journals (Sweden)

    Rachmad Vidya Wicaksana Putra

    2016-06-01

    Convolutional encoding and data decoding are fundamental processes in convolutional error correction. One of the most popular error correction methods in decoding is the Viterbi algorithm, which is extensively implemented in many digital communication applications. Its VLSI design challenges concern area, speed, power, complexity and configurability. In this research, we specifically propose a VLSI architecture for a configurable and low-complexity design of a hard-decision Viterbi decoding algorithm. The configurable and low-complexity design is achieved by designing a generic VLSI architecture, optimizing each processing element (PE) at the logical operation level and designing a conditional adapter. The proposed design can be configured for any predefined number of trace-backs, only by changing the trace-back parameter value. Its computational process needs only N + 2 clock cycles of latency, where N is the number of trace-backs. Its configurability has been proven for N = 8, N = 16, N = 32 and N = 64. Furthermore, the proposed design was synthesized and evaluated on Xilinx and Altera FPGA target boards for area consumption and speed performance.
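
    Hard-decision Viterbi decoding itself is compact enough to model in software. The Python sketch below is our own reference model for a rate-1/2, constraint-length-3 code with generators 7 and 5 (octal); the hardware's configurable trace-back is replaced by full-path survivors for simplicity. It encodes a message, injects one channel bit error, and recovers the message by minimum Hamming-distance path search through the trellis.

        G = (0b111, 0b101)                       # rate-1/2, constraint length 3 generators
        parity = lambda x: bin(x).count('1') & 1

        def conv_encode(bits):
            state, out = 0, []
            for b in bits:
                reg = (b << 2) | state           # current bit plus two memory bits
                out += [parity(reg & g) for g in G]
                state = reg >> 1
            return out

        def viterbi_decode(rx):
            INF = float('inf')
            metrics, paths = [0, INF, INF, INF], [[], [], [], []]    # start in the all-zero state
            for t in range(0, len(rx), 2):
                new_metrics, new_paths = [INF] * 4, [None] * 4
                for s in range(4):
                    if metrics[s] == INF:
                        continue
                    for b in (0, 1):             # hypothesize the next information bit
                        reg = (b << 2) | s
                        branch = sum(parity(reg & g) != r for g, r in zip(G, rx[t:t + 2]))
                        ns, m = reg >> 1, metrics[s] + branch
                        if m < new_metrics[ns]:  # keep only the survivor path into each state
                            new_metrics[ns], new_paths[ns] = m, paths[s] + [b]
                metrics, paths = new_metrics, new_paths
            return paths[metrics.index(min(metrics))]

        msg = [1, 0, 1, 1, 0, 0, 1]
        rx = conv_encode(msg + [0, 0])           # two tail bits flush the encoder
        rx[3] ^= 1                               # inject a single channel bit error
        print(viterbi_decode(rx)[:len(msg)] == msg)    # True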

  3. Convolving optically addressed VLSI liquid crystal SLM

    Science.gov (United States)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 X 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 X 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm2. The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.

  4. Compact MOSFET models for VLSI design

    CERN Document Server

    Bhattacharyya, A B

    2009-01-01

    Practicing designers, students, and educators in the semiconductor field face an ever expanding portfolio of MOSFET models. In Compact MOSFET Models for VLSI Design , A.B. Bhattacharyya presents a unified perspective on the topic, allowing the practitioner to view and interpret device phenomena concurrently using different modeling strategies. Readers will learn to link device physics with model parameters, helping to close the gap between device understanding and its use for optimal circuit performance. Bhattacharyya also lays bare the core physical concepts that will drive the future of VLSI.

  5. Fast-prototyping of VLSI

    International Nuclear Information System (INIS)

    Saucier, G.; Read, E.

    1987-01-01

    Fast-prototyping will be a reality in the very near future if both straightforward design methods and fast manufacturing facilities are available. This book focuses, first, on the motivation for fast-prototyping. Economic aspects and market considerations are analysed by European and Japanese companies. In the second chapter, new design methods are identified, mainly for full custom circuits. Of course, silicon compilers play a key role and the introduction of artificial intelligence techniques sheds a new light on the subject. At present, fast-prototyping on gate arrays or on standard cells is the most conventional technique and the third chapter updates the state-of-the art in this area. The fourth chapter concentrates specifically on the e-beam direct-writing for submicron IC technologies. In the fifth chapter, a strategic point in fast-prototyping, namely the test problem is addressed. The design for testability and the interface to the test equipment are mandatory to fulfill the test requirement for fast-prototyping. Finally, the last chapter deals with the subject of education when many people complain about the lack of use of fast-prototyping in higher education for VLSI

  6. Applications of VLSI circuits to medical imaging

    International Nuclear Information System (INIS)

    O'Donnell, M.

    1988-01-01

    In this paper the application of advanced VLSI circuits to medical imaging is explored. The relationship of both general purpose signal processing chips and custom devices to medical imaging is discussed using examples of fabricated chips. In addition, advanced CAD tools for silicon compilation are presented. Devices built with these tools represent a possible alternative to custom devices and general purpose signal processors for the next generation of medical imaging systems

  7. Design of a Low-Power VLSI Macrocell for Nonlinear Adaptive Video Noise Reduction

    Directory of Open Access Journals (Sweden)

    Sergio Saponara

    2004-09-01

    A VLSI macrocell for edge-preserving video noise reduction is proposed in the paper. It is based on a nonlinear rational filter enhanced by a noise estimator for blind and dynamic adaptation of the filtering parameters to the input signal statistics. The VLSI filter features a modular architecture allowing the extension of both mask size and filtering directions. Both spatial and spatiotemporal algorithms are supported. Simulation results with monochrome test videos prove its efficiency for many noise distributions, with PSNR improvements up to 3.8 dB with respect to a nonadaptive solution. The VLSI macrocell has been realized in a 0.18 μm CMOS technology using a standard-cells library; it allows for real-time processing of the main video formats, up to 30 frames per second (fps) 4CIF, with a power consumption on the order of a few mW.

  8. Artificial immune system algorithm in VLSI circuit configuration

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    In artificial intelligence, the artificial immune system is a robust bio-inspired heuristic method, extensively used in solving many constraint optimization problems, anomaly detection, and pattern recognition. This paper discusses the implementation and performance of an artificial immune system (AIS) algorithm integrated with Hopfield neural networks for VLSI circuit configuration based on 3-Satisfiability problems. Specifically, we emphasize the clonal selection technique in our binary artificial immune system algorithm. We restrict our logic construction to 3-Satisfiability (3-SAT) clauses in order to fit the transistor configuration in the VLSI circuit. The core impetus of this research is to find an ideal hybrid model to assist in the VLSI circuit configuration. In this paper, we compared the artificial immune system (AIS) algorithm (HNN-3SATAIS) with a brute force algorithm incorporated with the Hopfield neural network (HNN-3SATBF). Microsoft Visual C++ 2013 was used as a platform for training, simulating and validating the performance of the proposed network. The results show that HNN-3SATAIS outperformed HNN-3SATBF in terms of circuit accuracy and CPU time. Thus, HNN-3SATAIS can be used to detect an early error in the VLSI circuit design.

  9. Hybrid VLSI/QCA Architecture for Computing FFTs

    Science.gov (United States)

    Fijany, Amir; Toomarian, Nikzad; Modarres, Katayoon; Spotnitz, Matthew

    2003-01-01

    A data-processor architecture that would incorporate elements of both conventional very-large-scale integrated (VLSI) circuitry and quantum-dot cellular automata (QCA) has been proposed to enable the highly parallel and systolic computation of fast Fourier transforms (FFTs). The proposed circuit would complement the QCA-based circuits described in several prior NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855) Vol. 27, No. 1 (January 2003), page 32; and Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35. The cited prior articles described the limitations of very-large-scale integrated (VLSI) circuitry and the major potential advantage afforded by QCA. To recapitulate: In a VLSI circuit, signal paths that are required not to interact with each other must not cross in the same plane. In contrast, for reasons too complex to describe in the limited space available for this article, suitably designed and operated QCA-based signal paths that are required not to interact with each other can nevertheless be allowed to cross each other in the same plane without adverse effect. In principle, this characteristic could be exploited to design compact, coplanar, simple (relative to VLSI) QCA-based networks to implement complex, advanced interconnection schemes.

  10. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    National Research Council Canada - National Science Library

    Horiuchi, Timothy K; Krishnaprasad, P. S

    2007-01-01

    This includes multiple efforts related to a VLSI-based echolocation system being developed in one of our laboratories, from algorithm development and bat flight data analysis to VLSI circuit design...

  11. A memory-array architecture for computer vision

    Energy Technology Data Exchange (ETDEWEB)

    Balsara, P.T.

    1989-01-01

    With the fast advances in the area of computer vision and robotics there is a growing need for machines that can understand images at a very high speed. A conventional von Neumann computer is not suited for this purpose because it takes a tremendous amount of time to solve most typical image processing problems. Exploiting the inherent parallelism present in various vision tasks can significantly reduce the processing time. Fortunately, parallelism is increasingly affordable as hardware gets cheaper. Thus it is now imperative to study computer vision in a parallel processing framework. The approach is first to design a computational structure which is well suited for a wide range of vision tasks and then to develop parallel algorithms which can run efficiently on this structure. Recent advances in VLSI technology have led to several proposals for parallel architectures for computer vision. In this thesis the author demonstrates that a memory array architecture with efficient local and global communication capabilities can be used for high speed execution of a wide range of computer vision tasks. This architecture, called the Access Constrained Memory Array Architecture (ACMAA), is efficient for VLSI implementation because of its modular structure, simple interconnect and limited global control. Several parallel vision algorithms have been designed for this architecture. The choice of vision problems demonstrates the versatility of ACMAA for a wide range of vision tasks. These algorithms were simulated on a high level ACMAA simulator running on the Intel iPSC/2 hypercube, a parallel architecture. The results of this simulation are compared with those of sequential algorithms running on a single hypercube node. Details of the ACMAA processor architecture are also presented.

  12. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  13. Nano lasers in photonic VLSI

    NARCIS (Netherlands)

    Hill, M.T.; Oei, Y.S.; Smit, M.K.

    2007-01-01

    We examine the use of micro and nano lasers to form digital photonic VLSI building blocks. Problems such as isolation and cascading of building blocks are addressed, and the potential of future nano lasers explored.

  14. VLSI and system architecture-the new development of system 5G

    Energy Technology Data Exchange (ETDEWEB)

    Sakamura, K.; Sekino, A.; Kodaka, T.; Uehara, T.; Aiso, H.

    1982-01-01

    A research and development proposal is presented for VLSI CAD systems and for a hardware environment called System 5G on which the VLSI CAD systems run. The proposed CAD systems use a hierarchically organized design language to enable design of anything from basic architectures of VLSI to VLSI mask patterns in a uniform manner. The CAD systems will eventually become intelligent CAD systems that acquire design knowledge and perform automatic design of VLSI chips when the characteristic requirements of a VLSI chip are given. System 5G will consist of superinference machines and the 5G communication network. The superinference machine will be built on a functionally distributed architecture connecting inference machines and relational database machines via a high-speed local network. The transfer rate of the local network will be 100 Mbps at the first stage of the project and will be improved to 1 Gbps. Remote access to the superinference machine will be possible through the 5G communication network. Access to System 5G will use the 5G network architecture protocol. Users will access System 5G using standardized 5G personal computers and 5G personal logic programming stations, very highly intelligent terminals providing an instruction set that supports predicate logic and input/output facilities for audio and graphical information.

  15. The test of VLSI circuits

    Science.gov (United States)

    Baviere, Ph.

    Tests which have proven effective for evaluating VLSI circuits for space applications are described. It is recommended that circuits be examined after each manufacturing step to gain fast feedback on inadequacies in the production system. Data from failure modes which occur during the operational lifetimes of circuits also permit redefinition of the manufacturing and quality control process to eliminate the defects identified. Other tests include determination of the operational envelope of the circuits, examination of the circuit response to controlled inputs, and the performance and functional speeds of ROM and RAM memories. Finally, it is desirable that all new circuits be designed with testing in mind.

  16. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design and has driven the success of the VLSI industry, resulting in denser and faster integration of devices. As the technology node moves towards the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices are scaled. In this paper, different scaling methods are studied first. These scaling methods are used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. For mitigating the power dissipation in scaled devices, we have proposed a reliable leakage-reduction low power transmission gate (LPTG) approach and tested it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are taken with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to prove the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which results in greater circuit reliability. (semiconductor integrated circuits)

  17. Built-in self-repair of VLSI memories employing neural nets

    Science.gov (United States)

    Mazumder, Pinaki

    1998-10-01

    The decades of the Eighties and the Nineties have witnessed the spectacular growth of VLSI technology, with chip size increasing from a few hundred devices to a staggering multi-million transistors. This trend is expected to continue as the CMOS feature size progresses towards the nanometric dimension of 100 nm and less. The SIA roadmap projects that, whereas DRAM chips will integrate over 20 billion devices in the next millennium, future microprocessors may incorporate over 100 million transistors on a single chip. As VLSI chip sizes increase, the limited accessibility of circuit components poses great difficulty for external diagnosis and replacement in the presence of faulty components. For this reason, extensive work has been done on built-in self-test techniques, but little research is known concerning built-in self-repair. Moreover, the extra hardware introduced by conventional fault-tolerance techniques is also likely to become faulty, therefore causing the circuit to be useless. This research demonstrates the feasibility of implementing electronic neural networks as intelligent hardware for memory array repair. Most importantly, we show that the neural network control possesses a robust and gracefully degradable computing capability under various fault conditions. Overall, a yield analysis performed on 64K DRAMs shows that the yield can be improved from as low as 20 percent to near 99 percent due to the self-repair design, with an overhead of no more than 7 percent.

  18. VLSI Design with Alliance Free CAD Tools: an Implementation Example

    Directory of Open Access Journals (Sweden)

    Chávez-Bracamontes Ramón

    2015-07-01

    This paper presents the methodology used for a digital integrated circuit design that implements the communication protocol known as Serial Peripheral Interface, using the Alliance CAD System. The aim of this paper is to show how the work of VLSI design can be done by graduate and undergraduate students with minimal resources and experience. The physical design was sent for fabrication using the CMOS AMI C5 process, which features a 0.5 micrometer transistor size, sponsored by the MOSIS Educational Program. Tests were made on a platform that transfers data from inertial sensor measurements to the designed SPI chip, which in turn sends the data back on a parallel bus to a common microcontroller. The results show the efficiency of the employed methodology in VLSI design, as well as the feasibility of IC manufacturing from school projects that have little or no source of funding.

  19. Las Vegas is better than determinism in VLSI and distributed computing

    DEFF Research Database (Denmark)

    Mehlhorn, Kurt; Schmidt, Erik Meineche

    1982-01-01

    In this paper we describe a new method for proving lower bounds on the complexity of VLSI computations and, more generally, distributed computations. Lipton and Sedgewick observed that the crossing sequence arguments used to prove lower bounds in VLSI (or TM or distributed computing) apply to (ac...

  20. Handbook of VLSI chip design and expert systems

    CERN Document Server

    Schwarz, A F

    1993-01-01

    Handbook of VLSI Chip Design and Expert Systems provides information pertinent to the fundamental aspects of expert systems, which provide a knowledge-based approach to problem solving. This book discusses the use of expert systems in every possible subtask of VLSI chip design as well as in the interrelations between the subtasks. Organized into nine chapters, this book begins with an overview of design automation, which can be identified as Computer-Aided Design of Circuits and Systems (CADCAS). This text then presents the progress in artificial intelligence, with emphasis on expert systems.

  1. VLSI micro- and nanophotonics science, technology, and applications

    CERN Document Server

    Lee, El-Hang; Razeghi, Manijeh; Jagadish, Chennupati

    2011-01-01

    Addressing the growing demand for larger capacity in information technology, VLSI Micro- and Nanophotonics: Science, Technology, and Applications explores issues of science and technology of micro/nano-scale photonics and integration for broad-scale and chip-scale Very Large Scale Integration photonics. This book is a game-changer in the sense that it is quite possibly the first to focus on "VLSI Photonics". Very little effort has been made to develop integration technologies for micro/nanoscale photonic devices and applications, so this reference is an important and necessary early-stage pe

  2. Pursuit, Avoidance, and Cohesion in Flight: Multi-Purpose Control Laws and Neuromorphic VLSI

    Science.gov (United States)

    2010-10-01

    ...spatial navigation in mammals. We have designed, fabricated, and are now testing a neuromorphic VLSI chip that implements a spike-based, attractor... ...implementations (custom neuromorphic VLSI and robotics) we will apply important practical constraints that can lead to deeper insight into how and why efficient...

  3. vPELS: An E-Learning Social Environment for VLSI Design with Content Security Using DRM

    Science.gov (United States)

    Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn

    2014-01-01

    This article provides a proposal for a personal e-learning system (vPELS, where "v" stands for VLSI: very large scale integrated circuit) architecture in the context of a social network environment for VLSI design. The main objective of vPELS is to develop individual skills on a specific subject--say, VLSI--and share resources with peers.…

  4. Design and demonstration of a multitechnology FPGA for photonic information processing

    Science.gov (United States)

    Mal, Prosenjit; Hawk, Chris; Toshniwal, Kavita; Beyette, Fred R., Jr.

    2003-11-01

    We present here a novel architecture for a multi-technology field programmable gate array (MT-FPGA). Implemented with a conventional CMOS VLSI technology, the architecture is suitable for prototyping photonic information processing systems. This new FPGA architecture will enable the design of reconfigurable systems that incorporate technologies outside the traditional electronic domain.

  5. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields. 1.1 Types of Wavefields and the Governing Equations; 1.2 Wavefield in open space; 1.3 Wavefield in bounded space; 1.4 Stochastic wavefield; 1.5 Multipath propagation; 1.6 Propagation through random medium; 1.7 Exercises.
    Chapter Two: Sensor Array Systems. 2.1 Uniform linear array (ULA); 2.2 Planar array; 2.3 Distributed sensor array; 2.4 Broadband sensor array; 2.5 Source and sensor arrays; 2.6 Multi-component sensor array; 2.7 Exercises.
    Chapter Three: Frequency Wavenumber Processing. 3.1 Digital filters in the w-k domain; 3.2 Mapping of 1D into 2D filters; 3.3 Multichannel Wiener filters; 3.4 Wiener filters for ULA and UCA; 3.5 Predictive noise cancellation; 3.6 Exercises.
    Chapter Four: Source Localization: Frequency Wavenumber Spectrum. 4.1 Frequency wavenumber spectrum; 4.2 Beamformation; 4.3 Capon's w-k spectrum; 4.4 Maximum entropy w-k spectrum; 4.5 Doppler-Azimuth Processing; 4.6 Exercises.
    Chapter Five: Source Localization: Subspace Methods. 5.1 Subspace methods (Narrowband); 5.2 Subspace methods (B...

  6. Application of evolutionary algorithms for multi-objective optimization in VLSI and embedded systems

    CERN Document Server

    2015-01-01

    This book describes how evolutionary algorithms (EA), including genetic algorithms (GA) and particle swarm optimization (PSO) can be utilized for solving multi-objective optimization problems in the area of embedded and VLSI system design. Many complex engineering optimization problems can be modelled as multi-objective formulations. This book provides an introduction to multi-objective optimization using meta-heuristic algorithms, GA and PSO, and how they can be applied to problems like hardware/software partitioning in embedded systems, circuit partitioning in VLSI, design of operational amplifiers in analog VLSI, design space exploration in high-level synthesis, delay fault testing in VLSI testing, and scheduling in heterogeneous distributed systems. It is shown how, in each case, the various aspects of the EA, namely its representation, and operators like crossover, mutation, etc. can be separately formulated to solve these problems. This book is intended for design engineers and researchers in the field ...

  7. Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -

    Science.gov (United States)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  8. Microfluidic very large scale integration (VLSI) modeling, simulation, testing, compilation and physical synthesis

    CERN Document Server

    Pop, Paul; Madsen, Jan

    2016-01-01

    This book presents the state-of-the-art techniques for the modeling, simulation, testing, compilation and physical synthesis of mVLSI biochips. The authors describe a top-down modeling and synthesis methodology for mVLSI biochips, inspired by microelectronics VLSI methodologies. They introduce a modeling framework for the components and the biochip architecture, and a high-level microfluidic protocol language. Coverage includes a topology graph-based model for the biochip architecture and a sequencing graph model for the biochemical application, showing how the application model can be obtained from the protocol language. The techniques described facilitate programmability and automation, enabling developers in the emerging, large biochip market. · Presents the current models used for the research on compilation and synthesis techniques of mVLSI biochips in a tutorial fashion; · Includes a set of "benchmarks" that are presented in great detail and include the source code of several of the techniques p...

  9. Spike Neuromorphic VLSI-Based Bat Echolocation for Micro-Aerial Vehicle Guidance

    Science.gov (United States)

    2007-03-31

    ...uncovered interesting new issues in our choice for representing the intensity of signals. We have just finished testing the first chip version of an echo... timing-based algorithm ('openspace') for sonar-guided navigation amidst multiple obstacles. Subject terms: neuromorphic VLSI, bat echolocation.

  10. Parallel computation of nondeterministic algorithms in VLSI

    Energy Technology Data Exchange (ETDEWEB)

    Hortensius, P D

    1987-01-01

    This work examines parallel VLSI implementations of nondeterministic algorithms. It is demonstrated that conventional pseudorandom number generators are unsuitable for highly parallel applications. Efficient parallel pseudorandom sequence generation can be accomplished using certain classes of elementary one-dimensional cellular automata. The pseudorandom numbers appear in parallel on each clock cycle. Extensive study of the properties of these new pseudorandom number generators is made using standard empirical random number tests, cycle length tests, and implementation considerations. Furthermore, it is shown these particular cellular automata can form the basis of efficient VLSI architectures for computations involved in the Monte Carlo simulation of both the percolation and Ising models from statistical mechanics. Finally, a variation on a Built-In Self-Test technique based upon cellular automata is presented. These Cellular Automata-Logic-Block-Observation (CALBO) circuits improve upon conventional design for testability circuitry.
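
    To give a concrete flavour of parallel pseudorandom sequence generation with an elementary one-dimensional cellular automaton, the Python sketch below iterates a CA on a ring of cells and reads one pseudorandom bit per cell on every clock cycle. The choice of rule 30, the register width and the seed are illustrative assumptions; the thesis studies specific CA classes (and CA-based BIST structures commonly use rule 90/150 hybrids), which this sketch does not reproduce.

      def ca_step(state, rule=30):
          """One synchronous update of an elementary cellular automaton on a ring."""
          n = len(state)
          nxt = []
          for i in range(n):
              left, centre, right = state[(i - 1) % n], state[i], state[(i + 1) % n]
              neighbourhood = (left << 2) | (centre << 1) | right
              nxt.append((rule >> neighbourhood) & 1)   # look up the rule-table bit
          return nxt

      def ca_prng(width=16, cycles=8, seed=1):
          """Emit `width` pseudorandom bits per cycle, one bit per cell, in parallel."""
          state = [(seed >> i) & 1 for i in range(width)]
          words = []
          for _ in range(cycles):
              state = ca_step(state)
              words.append(int("".join(map(str, state)), 2))
          return words

      print([f"{w:04x}" for w in ca_prng()])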

  11. Formal verification an essential toolkit for modern VLSI design

    CERN Document Server

    Seligman, Erik; Kumar, M V Achutha Kiran

    2015-01-01

    Formal Verification: An Essential Toolkit for Modern VLSI Design presents practical approaches for design and validation, with hands-on advice for working engineers integrating these techniques into their work. Building on a basic knowledge of System Verilog, this book demystifies FV and presents the practical applications that are bringing it into mainstream design and validation processes at Intel and other companies. The text prepares readers to effectively introduce FV in their organization and deploy FV techniques to increase design and validation productivity. Presents formal verific

  12. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.

  13. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  14. VLSI Architectures for the Multiplication of Integers Modulo a Fermat Number

    Science.gov (United States)

    Chang, J. J.; Truong, T. K.; Reed, I. S.; Hsu, I. S.

    1984-01-01

    Multiplication is central in the implementation of Fermat number transforms and other residue number algorithms. There is a need for a good multiplication algorithm that can be realized easily on a very large scale integration (VLSI) chip. The Leibowitz multiplier is modified to realize multiplication in the ring of integers modulo a Fermat number. This new algorithm requires only a sequence of cyclic shifts and additions. The designs developed for this new multiplier are regular, simple, expandable, and, therefore, suitable for VLSI implementation.
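
    The property such multipliers exploit is that, modulo a Fermat number F_t = 2^b + 1 with b = 2^t, we have 2^b ≡ -1, so multiplying by a power of two reduces to a shift of the low bits and a subtraction of the bits that wrap around. The Python sketch below shows only this shift-and-subtract reduction in ordinary integer arithmetic; it does not reproduce the diminished-one coding or the modified Leibowitz cell structure described in the paper.

      def mul_pow2_mod_fermat(x, s, t):
          """Compute (x * 2**s) mod F_t, with F_t = 2**(2**t) + 1, using only shifts
          and one subtraction, exploiting 2**b = -1 (mod F_t) for b = 2**t."""
          b = 1 << t                        # word length b = 2^t
          fermat = (1 << b) + 1             # Fermat number F_t
          y = x << (s % (2 * b))            # 2^(2b) = 1 (mod F_t), so shifts repeat with period 2b
          lo = y & ((1 << b) - 1)           # low b bits
          hi = y >> b                       # wrapped-around bits pick up a minus sign
          return (lo - hi) % fermat

      # check against ordinary modular multiplication for F_3 = 257
      t = 3
      fermat = (1 << (1 << t)) + 1
      for x in range(fermat):
          for s in range(20):
              assert mul_pow2_mod_fermat(x, s, t) == (x * pow(2, s, fermat)) % fermat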

  15. CMOS VLSI Active-Pixel Sensor for Tracking

    Science.gov (United States)

    Pain, Bedabrata; Sun, Chao; Yang, Guang; Heynssens, Julie

    2004-01-01

    An architecture for a proposed active-pixel sensor (APS) and a design to implement the architecture in a complementary metal oxide semiconductor (CMOS) very-large-scale integrated (VLSI) circuit provide for some advanced features that are expected to be especially desirable for tracking pointlike features of stars. The architecture would also make this APS suitable for robotic-vision and general pointing and tracking applications. CMOS imagers in general are well suited for pointing and tracking because they can be configured for random access to selected pixels and to provide readout from windows of interest within their fields of view. However, until now, the architectures of CMOS imagers have not supported multiwindow operation or low-noise data collection. Moreover, smearing and motion artifacts in collected images have made prior CMOS imagers unsuitable for tracking applications. The proposed CMOS imager (see figure) would include an array of 1,024 by 1,024 pixels containing high-performance photodiode-based APS circuitry. The pixel pitch would be 9 µm. The operations of the pixel circuits would be sequenced and otherwise controlled by an on-chip timing and control block, which would enable the collection of image data, during a single frame period, from either the full frame (that is, all 1,024 by 1,024 pixels) or from within as many as 8 different arbitrarily placed windows as large as 8 by 8 pixels each. A typical prior CMOS APS operates in a row-at-a-time ("rolling-shutter") readout mode, which gives rise to exposure skew. In contrast, the proposed APS would operate in a sample-first/read-later mode, suppressing rolling-shutter effects. In this mode, the analog readout signals from the pixels corresponding to the windows of interest (which windows, in the star-tracking application, would presumably contain guide stars) would be sampled rapidly by routing them through a programmable diagonal switch array to an on-chip parallel analog memory array. The

  16. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  17. Synthesis of on-chip control circuits for mVLSI biochips

    DEFF Research Database (Denmark)

    Potluri, Seetal; Schneider, Alexander Rüdiger; Hørslev-Petersen, Martin

    2017-01-01

    them to laboratory environments. To address this issue, researchers have proposed methods to reduce the number of offchip pressure sources, through integration of on-chip pneumatic control logic circuits fabricated using three-layer monolithic membrane valve technology. Traditionally, mVLSI biochip......-chip control circuit design and (iii) the integration of on-chip control in the placement and routing design tasks. In this paper we present a design methodology for logic synthesis and physical synthesis of mVLSI biochips that use on-chip control. We show how the proposed methodology can be successfully...... applied to generate biochip layouts with integrated on-chip pneumatic control....

  18. Emerging Applications for High K Materials in VLSI Technology

    Science.gov (United States)

    Clark, Robert D.

    2014-01-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599

  19. Emerging Applications for High K Materials in VLSI Technology

    Directory of Open Access Journals (Sweden)

    Robert D. Clark

    2014-04-01

    The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing.

  20. Harnessing VLSI System Design with EDA Tools

    CERN Document Server

    Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj

    2012-01-01

    This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and System C synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

  1. Trace-based post-silicon validation for VLSI circuits

    CERN Document Server

    Liu, Xiao

    2014-01-01

    This book first provides a comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits.  The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective.  A series of automatic tracing solutions and innovative design for debug (DfD) techniques are described, including techniques for trace signal selection for enhancing visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility and a DfD design and associated tracing solution for improving debug efficiency and expanding tracing window. The solutions presented in this book improve the validation quality of VLSI circuit...

  2. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    (Author: Louis Steinberg.) ...major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to...

  3. Advanced symbolic analysis for VLSI systems methods and applications

    CERN Document Server

    Shi, Guoyong; Tlelo Cuautle, Esteban

    2014-01-01

    This book provides comprehensive coverage of the recent advances in symbolic analysis techniques for design automation of nanometer VLSI systems. The presentation is organized in parts of fundamentals, basic implementation methods and applications for VLSI design. Topics emphasized include statistical timing and crosstalk analysis, statistical and parallel analysis, performance bound analysis and behavioral modeling for analog integrated circuits. Among the recent advances, the Binary Decision Diagram (BDD) based approaches are studied in depth. The BDD-based hierarchical symbolic analysis approaches have essentially broken the analog circuit size barrier. In particular, this book • Provides an overview of classical symbolic analysis methods and a comprehensive presentation on the modern BDD-based symbolic analysis techniques; • Describes detailed implementation strategies for BDD-based algorithms, including the principles of zero-suppression, variable ordering and canonical reduction; • Int...

  4. Point DCT VLSI Architecture for Emerging HEVC Standard

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmed

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4×4 up to 32×32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to achieve the speed needed for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into sparse submatrices in order to reduce the number of multiplications. Finally, multiplications are completely eliminated using the lifting scheme. The proposed architecture sustains real-time processing of a 1080p HD video codec running at 150 MHz.
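
    The sparse-matrix decomposition mentioned above can be illustrated on the smallest HEVC transform size. The Python sketch below evaluates the 4-point HEVC core transform both by direct matrix multiplication and by the even/odd ("partial butterfly") split into two 2x2 sub-matrices, which halves the multiplication count; the lifting-based multiplierless variant used in the paper is not reproduced here.

      # HEVC 4-point forward core transform matrix (normalization shifts omitted).
      H4 = [[64, 64, 64, 64],
            [83, 36, -36, -83],
            [64, -64, -64, 64],
            [36, -83, 83, -36]]

      def dct4_direct(x):
          return [sum(H4[k][n] * x[n] for n in range(4)) for k in range(4)]

      def dct4_butterfly(x):
          """Even/odd decomposition: each output uses a 2x2 sparse sub-matrix."""
          e0, e1 = x[0] + x[3], x[1] + x[2]      # even part
          o0, o1 = x[0] - x[3], x[1] - x[2]      # odd part
          return [64 * (e0 + e1),                # y0
                  83 * o0 + 36 * o1,             # y1
                  64 * (e0 - e1),                # y2
                  36 * o0 - 83 * o1]             # y3

      x = [13, -7, 42, 5]
      assert dct4_direct(x) == dct4_butterfly(x)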

  5. Digital VLSI design with Verilog a textbook from Silicon Valley Technical Institute

    CERN Document Server

    Williams, John

    2008-01-01

    This unique textbook is structured as a step-by-step course of study along the lines of a VLSI IC design project. In a nominal schedule of 12 weeks, two days and about 10 hours per week, the entire verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer - deserializer, including synthesizable PLLs. Digital VLSI Design With Verilog is all an engineer needs for in-depth understanding of the verilog language: Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided on the

  6. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.
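
    At the behavioural level, the select-and-suppress loop described above can be approximated in a few lines of software: pick the most salient location (winner-take-all), attend to it, then inhibit it so that the selection shifts to the next most salient location. The Python sketch below is a purely algorithmic illustration with a made-up saliency map; it does not model the chip's analog spike-based circuits.

      def attention_scanpath(saliency, n_fixations=3, inhibition=0.0):
          """Sequentially select the most salient location, then suppress it
          (inhibition of return) so that attention moves on."""
          s = [row[:] for row in saliency]          # work on a copy
          path = []
          for _ in range(n_fixations):
              _, r, c = max((v, r, c) for r, row in enumerate(s) for c, v in enumerate(row))
              path.append((r, c))
              s[r][c] = inhibition                  # inhibit the attended location
          return path

      saliency = [[0.1, 0.9, 0.2],
                  [0.4, 0.3, 0.8],
                  [0.7, 0.1, 0.0]]
      print(attention_scanpath(saliency))           # [(0, 1), (1, 2), (2, 0)]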

  7. Numerical analysis of electromigration in thin film VLSI interconnections

    NARCIS (Netherlands)

    Petrescu, V.; Mouthaan, A.J.; Schoenmaker, W.; Angelescu, S.; Vissarion, R.; Dima, G.; Wallinga, Hans; Profirescu, M.D.

    1995-01-01

    Due to the continuing downscaling of the dimensions in VLSI circuits, electromigration is becoming a serious reliability hazard. A software tool based on finite element analysis has been developed to solve the two partial differential equations of the two particle vacancy/imperfection model.

  8. Heavy ion tests on programmable VLSI

    International Nuclear Information System (INIS)

    Provost-Grellier, A.

    1989-11-01

    Radiation in the space environment induces operational damage in onboard computer systems. A strategy for the qualification and selection of Very Large Scale Integrated (VLSI) circuits is therefore needed. The 'upset' phenomenon is known to be the most critical integrated-circuit radiation effect. Strategies for testing integrated circuits are reviewed. A method and a test device were developed and applied to candidate circuits for space applications. Cyclotron, synchrotron and Californium source experiments were carried out [fr

  9. The GLUEchip: A custom VLSI chip for detectors readout and associative memories circuits

    International Nuclear Information System (INIS)

    Amendolia, S.R.; Galeotti, S.; Morsani, F.; Passuello, D.; Ristori, L.; Turini, N.

    1993-01-01

    An associative-memory full-custom VLSI chip for pattern recognition, the AMchip, has been designed and tested in past years; it contains 128 patterns of 60 bits each. To expand the pattern capacity of an Associative Memory bank, the custom VLSI GLUEchip has been developed. The GLUEchip allows the interconnection of up to 16 AMchips or up to 16 GLUEchips: the resulting tree-like structure works like a single AMchip with an output pipelined structure and a pattern capacity increased by a factor of 16 for each GLUEchip used.

  10. ALMA Array Operations Group process overview

    Science.gov (United States)

    Barrios, Emilio; Alarcon, Hector

    2016-07-01

    ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG) and the Data Management Group (DMG). The AOG includes the Array Operators and has the mission of supporting science observations by operating the array safely and efficiently. The poster describes the AOG process, management and operational tools.

  11. An engineering methodology for implementing and testing VLSI (Very Large Scale Integrated) circuits

    Science.gov (United States)

    Corliss, Walter F., II

    1989-03-01

    The engineering methodology for producing a fully tested VLSI chip from a design layout is presented. A 16-bit correlator, NPS CORN88, that was previously designed was used as a vehicle to demonstrate this methodology. The study of the design and simulation tools, MAGIC and MOSSIM II, was the focus of the design and validation process. The design was then implemented and the chip was fabricated by MOSIS. This fabricated chip was then used to develop a testing methodology for using the digital test facilities at NPS. NPS CORN88 was the first full-custom VLSI chip designed at NPS to be tested with the NPS digital analysis system, the Tektronix DAS 9100 series tester. The capabilities and limitations of these test facilities are examined. NPS CORN88 test results are included to demonstrate the capabilities of the digital test system. A translator, MOS2DAS, was developed to convert the MOSSIM II simulation program to the input files required by the DAS 9100 device verification software, 91DVS. Finally, a tutorial for using the digital test facilities, including the DAS 9100 and associated support equipment, is included as an appendix.

  12. CAPCAL, 3-D Capacitance Calculator for VLSI Purposes

    International Nuclear Information System (INIS)

    Seidl, Albert; Klose, Helmut; Svoboda, Mildos

    2004-01-01

    1 - Description of program or function: CAPCAL is devoted to the calculation of capacitances of three-dimensional wiring configurations as typically used in VLSI circuits. Due to analogies in the mathematical description, conductance and heat-transport problems can also be treated by CAPCAL. To handle the problem using CAPCAL, some approximations have to be applied to the structure under investigation: - the overall geometry has to be confined to a finite domain by using symmetry properties of the problem - non-rectangular structures have to be simplified into an artwork of multiple boxes. 2 - Method of solution: The electrical field is described by the Laplace equation. The differential equation is discretized by using the finite difference method. NEA-1327/01: The linear equation system is solved by using a combined ADI-multigrid method. NEA-1327/04: The linear equation system is solved by using a conjugate gradient method for CAPCAL V1.3. NEA-1327/05: The linear equation system is solved by using a conjugate gradient method for CAPCAL V1.3. 3 - Restrictions on the complexity of the problem: NEA-1327/01: Certain restrictions of use may arise from the dimensioning of arrays. Field lengths are defined via PARAMETER statements which can easily be modified. If the geometry of the problem is defined such that Neumann boundaries dominate, the convergence of the iterative equation-system solver is affected
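
    The finite-difference approach used by CAPCAL can be illustrated in two dimensions. The Python toy below relaxes the Laplace equation by Jacobi iteration around a square conductor held at 1 V inside a grounded box, then estimates the capacitance per unit length from the total flux crossing the conductor surface (Gauss's law). The geometry, grid size and the simple Jacobi solver are illustrative choices, not CAPCAL's ADI-multigrid or conjugate-gradient solvers.

      EPS0 = 8.854e-12   # vacuum permittivity, F/m

      def capacitance_2d(n=41, iters=2000):
          """Capacitance per unit length of a centred square conductor at 1 V
          inside a grounded square box, on an n x n finite-difference grid."""
          v = [[0.0] * n for _ in range(n)]
          lo, hi = n // 3, 2 * n // 3                   # conductor spans the middle third
          conductor = {(i, j) for i in range(lo, hi + 1) for j in range(lo, hi + 1)}
          for i, j in conductor:
              v[i][j] = 1.0

          for _ in range(iters):                        # Jacobi relaxation of Laplace's equation
              new = [row[:] for row in v]
              for i in range(1, n - 1):
                  for j in range(1, n - 1):
                      if (i, j) not in conductor:
                          new[i][j] = 0.25 * (v[i + 1][j] + v[i - 1][j] + v[i][j + 1] + v[i][j - 1])
              v = new

          # Q per unit length: with unit grid spacing the flux out of the conductor is
          # eps0 times the sum of potential drops to the neighbouring field cells.
          flux = sum(v[i][j] - v[i + di][j + dj]
                     for i, j in conductor
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (i + di, j + dj) not in conductor)
          return EPS0 * flux                            # C = Q / V with V = 1

      print(f"C per unit length ~ {capacitance_2d():.3e} F/m")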

  13. Towards an Analogue Neuromorphic VLSI Instrument for the Sensing of Complex Odours

    Science.gov (United States)

    Ab Aziz, Muhammad Fazli; Harun, Fauzan Khairi Che; Covington, James A.; Gardner, Julian W.

    2011-09-01

    Almost all electronic nose instruments reported today employ pattern recognition algorithms written in software and run on digital processors, e.g. microprocessors, microcontrollers or FPGAs. Conversely, in this paper we describe the analogue VLSI implementation of an electronic nose through the design of a neuromorphic olfactory chip. The modelling, design and fabrication of the chip have already been reported. Here a smart interface has been designed and characterised for this neuromorphic chip. Thus we can demonstrate the functionality of the analogue VLSI neuromorphic chip, producing differing principal-neuron firing patterns in response to real sensor data. Further work is directed towards integrating 9 separate neuromorphic chips to create a large neuronal network to solve more complex olfactory problems.

  14. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  15. VLSI top-down design based on the separation of hierarchies

    NARCIS (Netherlands)

    Spaanenburg, L.; Broekema, A.; Leenstra, J.; Huys, C.

    1986-01-01

    Despite the presence of structure, interactions between the three views on VLSI design still lead to lengthy iterations. By separating the hierarchies for the respective views, the interactions are reduced. This separated hierarchy allows top-down design with functional abstractions as exemplified

  16. Development of Radhard VLSI electronics for SSC calorimeters

    International Nuclear Information System (INIS)

    Dawson, J.W.; Nodulman, L.J.

    1989-01-01

    A new program for the development of integrated electronics for liquid argon calorimeters in the SSC detector environment is being started at Argonne National Laboratory. Scientists from Brookhaven National Laboratory and Vanderbilt University, together with an industrial participant, are expected to collaborate in this work. Interaction rates, segmentation, and the radiation environment dictate that the front-end electronics of SSC calorimeters must be implemented in the form of highly integrated, radhard, analog, low-noise, VLSI custom monolithic devices. Important considerations are power dissipation, the choice of functions integrated on the front-end chips, and cabling requirements. An extensive level of expertise in radhard electronics exists within the industrial community, and a primary objective of this work is to bring that expertise to bear on the problems of SSC detector design. Radiation hardness measurements and requirements as well as calorimeter design will be primarily the responsibility of Argonne scientists and our Brookhaven and Vanderbilt colleagues. Radhard VLSI design and fabrication will be primarily the industrial participant's responsibility. The rapid-cycling synchrotron at Argonne will be used for radiation damage studies involving response to neutrons and charged particles, while damage from gammas will be investigated at Brookhaven. 10 refs., 6 figs., 2 tabs

  17. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  18. Systolic processing and an implementation for signal and image processing

    Energy Technology Data Exchange (ETDEWEB)

    Kulkarni, A.V.; Yen, D.W.L.

    1982-10-01

    Many signal and image processing applications impose a severe demand on the I/O bandwidth and computation power of general-purpose computers. The systolic concept offers guidelines in building cost-effective systems that balance I/O with computation. The resulting simplicity and regularity of such systems leads to modular designs suitable for VLSI implementation. The authors describe a linear systolic array capable of evaluating a large class of inner-product functions used in signal and image processing. These include matrix multiplications, multidimensional convolutions using fixed or time-varying kernels, as well as various nonlinear functions of vectors. The system organization of a working prototype is also described. 11 references.
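
    The flavour of such a linear systolic array can be captured with a short cycle-accurate software simulation. In the weight-stationary arrangement sketched below (Python), every cell holds one coefficient, the current input sample is broadcast to all cells each cycle, and partial sums shift one cell per cycle, so a complete sliding inner product leaves the last cell every cycle once the pipeline is full. The broadcast-input FIR-style kernel is an illustrative choice, not the specific prototype described by the authors.

      def systolic_fir(x, w):
          """Cycle-accurate simulation of a weight-stationary linear systolic array
          computing the sliding inner products y[i] = sum_j w[j] * x[i + j]."""
          k = len(w)
          regs = [0.0] * k                      # partial sum held by each cell
          outputs = []
          for t, sample in enumerate(x):        # one broadcast input sample per cycle
              new_regs = [0.0] * k
              for j in range(k):                # all cells fire in parallel
                  upstream = regs[j - 1] if j > 0 else 0.0
                  new_regs[j] = upstream + w[j] * sample
              if t >= k - 1:                    # first complete result after k cycles
                  outputs.append(new_regs[k - 1])
              regs = new_regs
          return outputs

      # quick check against a direct computation
      x = [1.0, 2.0, 3.0, 4.0, 5.0]
      w = [0.5, -1.0, 2.0]
      direct = [sum(w[j] * x[i + j] for j in range(len(w))) for i in range(len(x) - len(w) + 1)]
      assert systolic_fir(x, w) == direct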

  19. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    Science.gov (United States)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  20. VLSI architectures for modern error-correcting codes

    CERN Document Server

    Zhang, Xinmiao

    2015-01-01

    Error-correcting codes are ubiquitous. They are adopted in almost every modern digital communication and storage system, such as wireless communications, optical communications, Flash memories, computer hard drives, sensor networks, and deep-space probing. New-generation and emerging applications demand codes with better error-correcting capability. On the other hand, the design and implementation of those high-gain error-correcting codes pose many challenges. They usually involve complex mathematical computations, and mapping them directly to hardware often leads to very high complexity. VLSI

  1. VLSI Design of Trusted Virtual Sensors

    Directory of Open Access Journals (Sweden)

    Macarena C. Martínez-Rodríguez

    2018-01-01

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).

  2. VLSI Design of Trusted Virtual Sensors.

    Science.gov (United States)

    Martínez-Rodríguez, Macarena C; Prada-Delgado, Miguel A; Brox, Piedad; Baturone, Iluminada

    2018-01-25

    This work presents a Very Large Scale Integration (VLSI) design of trusted virtual sensors providing a minimum unitary cost and very good figures of size, speed and power consumption. The sensed variable is estimated by a virtual sensor based on a configurable and programmable PieceWise-Affine hyper-Rectangular (PWAR) model. An algorithm is presented to find the best values of the programmable parameters given a set of (empirical or simulated) input-output data. The VLSI design of the trusted virtual sensor uses the fast authenticated encryption algorithm, AEGIS, to ensure the integrity of the provided virtual measurement and to encrypt it, and a Physical Unclonable Function (PUF) based on a Static Random Access Memory (SRAM) to ensure the integrity of the sensor itself. Implementation results of a prototype designed in a 90-nm Complementary Metal Oxide Semiconductor (CMOS) technology show that the active silicon area of the trusted virtual sensor is 0.86 mm² and its power consumption when trusted sensing at 50 MHz is 7.12 mW. The maximum operation frequency is 85 MHz, which allows response times lower than 0.25 μs. As an application example, the designed prototype was programmed to estimate the yaw rate in a vehicle, obtaining root mean square errors lower than 1.1%. Experimental results of the employed PUF show the robustness of the trusted sensing against aging and variations of the operation conditions, namely, temperature and power supply voltage (final value as well as ramp-up time).
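
    To sketch how a PieceWise-Affine hyper-Rectangular (PWAR) virtual sensor produces its estimate, the Python code below partitions each input axis into intervals, locates the hyper-rectangular cell that contains the current input, and evaluates that cell's affine model. The two-input layout, the partition and the coefficients are made-up placeholders, and the trust layer (AEGIS authenticated encryption and the SRAM PUF) is not modelled.

      from bisect import bisect_right

      class PWARModel:
          """Piecewise-affine model over a hyper-rectangular partition of the inputs."""

          def __init__(self, breakpoints, coeffs):
              # breakpoints: per-input list of interior partition thresholds
              # coeffs[cell] = (a_1, ..., a_d, b) giving y = a . x + b inside that cell
              self.breakpoints = breakpoints
              self.coeffs = coeffs

          def cell_index(self, x):
              idx = 0
              for xi, bps in zip(x, self.breakpoints):
                  idx = idx * (len(bps) + 1) + bisect_right(bps, xi)
              return idx

          def estimate(self, x):
              *a, b = self.coeffs[self.cell_index(x)]
              return sum(ai * xi for ai, xi in zip(a, x)) + b

      # hypothetical 2-input sensor with a 2 x 2 partition (4 affine pieces)
      model = PWARModel(
          breakpoints=[[0.0], [0.0]],
          coeffs=[(0.5, 1.0, 0.1),     # cell 0: x0 < 0, x1 < 0
                  (0.5, 1.2, 0.0),     # cell 1: x0 < 0, x1 >= 0
                  (0.7, 1.0, -0.1),    # cell 2: x0 >= 0, x1 < 0
                  (0.7, 1.2, 0.2)])    # cell 3: x0 >= 0, x1 >= 0
      print(model.estimate((0.3, -0.4)))   # cell 2: 0.7*0.3 + 1.0*(-0.4) - 0.1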

  3. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...
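
    The Kalman-type recursion at the heart of such model-based processors can be shown in its simplest scalar form: predict the state with the model, then correct the prediction with the next measurement, weighting each by its uncertainty. The random-walk state model and the noise variances below are generic placeholders rather than an underwater-acoustics example.

      import random

      def scalar_kalman(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
          """Scalar Kalman filter for a random-walk state observed in white noise.
          q: process-noise variance, r: measurement-noise variance."""
          x, p = x0, p0
          estimates = []
          for z in measurements:
              p = p + q                    # predict (random walk leaves the mean unchanged)
              k = p / (p + r)              # Kalman gain
              x = x + k * (z - x)          # update with the measurement z
              p = (1.0 - k) * p
              estimates.append(x)
          return estimates

      random.seed(0)
      truth = 1.5
      zs = [truth + random.gauss(0.0, 0.5) for _ in range(50)]
      print(scalar_kalman(zs)[-1])         # converges toward the true value 1.5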

  4. Improvement of CMOS VLSI rad tolerance by processing technics

    International Nuclear Information System (INIS)

    Guyomard, D.; Desoutter, I.

    1986-01-01

    The following study concerns the development of integrated circuits for fields requiring only relatively low radiation tolerance levels, especially the civilian space sector. Process modifications constitute our basic study; they have been carried into effect, and our work and main results are reported in this paper. Well-known 2.5 and 3 μm CMOS technologies are considered. A first set of modifications enables us to double the cumulative-dose tolerance of a 4 Kbit SRAM, while keeping the same kind of damage. We obtain memories which tolerate radiation doses as high as 16 kRad(Si). Repeatability of the results, linked to the quality assurance of this specific circuit, is reported here. A second set of modifications concerns the processing of gate arrays. In particular, the choice of the silicon substrate type (epitaxial substrate) is under investigation. On the other hand, a complete study of a test vehicle allows us to accurately measure the radiation tolerance of various components of the cell library [fr

  5. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam

    2011-12-01

    Power dissipation poses a great challenge for VLSI designers. With the intense down-scaling of technology, the total power consumption of the chip is made up primarily of leakage power dissipation. This paper proposes a custom-designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result of implementing this novel power-gating technique, a standby leakage power reduction of 99% and energy savings of 33.3% are achieved. Finally, the possible effects of surge currents and ground bounce noise are studied. These findings allow longer operation times for battery-operated systems characterized by long standby periods. © 2011 IEEE.

  6. Macrocell Builder: IP-Block-Based Design Environment for High-Throughput VLSI Dedicated Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Urard Pascal

    2006-01-01

    We propose an efficient IP-block-based design environment for high-throughput VLSI systems. The flow generates a SystemC register-transfer-level (RTL) architecture, starting from a Matlab functional model described as a netlist of functional IP. The refinement model automatically inserts control structures to manage delays induced by the use of RTL IPs. It also inserts a control structure to coordinate the execution of parallel clocked IPs. The delays may be managed by registers or by counters included in the control structure. The flow has been used successfully in three real-world DSP systems. The experiments show that the approach can produce efficient RTL architectures and saves a huge amount of time.

  7. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study's factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
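
    One of the programmer choices with the largest runtime impact in studies of this kind is traversal order: visiting a two-dimensional array in the order it is laid out in memory keeps accesses cache-friendly. The sketch below demonstrates that locality principle in Python/NumPy rather than in C or C++, so the absolute timings differ from the study, but the row-major versus column-major contrast it shows is the same effect the study measures.

      import time
      import numpy as np

      a = np.random.rand(2000, 2000)        # C (row-major) layout by default

      def sum_row_major(m):
          """Outer loop over rows: each slice m[i, :] is contiguous in memory."""
          return sum(m[i, :].sum() for i in range(m.shape[0]))

      def sum_column_major(m):
          """Outer loop over columns: each slice m[:, j] is strided and cache-unfriendly."""
          return sum(m[:, j].sum() for j in range(m.shape[1]))

      for fn in (sum_row_major, sum_column_major):
          t0 = time.perf_counter()
          fn(a)
          print(f"{fn.__name__}: {time.perf_counter() - t0:.3f} s")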

  8. Experimental investigation of the ribbon-array ablation process

    International Nuclear Information System (INIS)

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-01-01

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on the Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10⁷ cm/s.

  9. Processors and systems (picture processing)

    Energy Technology Data Exchange (ETDEWEB)

    Gemmar, P

    1983-01-01

    Automatic picture processing requires high performance computers and high transmission capacities in the processor units. The author examines the possibilities of operating processors in parallel in order to accelerate the processing of pictures. He therefore discusses a number of available processors and systems for picture processing and illustrates their capacities for special types of picture processing. He stresses the fact that the amount of storage required for picture processing is exceptionally high. The author concludes that it is as yet difficult to decide whether very large groups of simple processors or highly complex multiprocessor systems will provide the best solution. Both methods will be aided by the development of VLSI. New solutions have already been offered (systolic arrays and 3-d processing structures) but they also are subject to losses caused by inherently parallel algorithms. Greater efforts must be made to produce suitable software for multiprocessor systems. Some possibilities for future picture processing systems are discussed. 33 references.

  10. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  11. Integrating Scientific Array Processing into Standard SQL

    Science.gov (United States)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and in an extension also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  12. Development methods for VLSI-processors

    International Nuclear Information System (INIS)

    Horninger, K.; Sandweg, G.

    1982-01-01

    The aim of this project, which was originally planned for 3 years, was the development of modern system and circuit concepts for VLSI processors having a 32-bit-wide data path. The result of this first year's work is the concept of a general-purpose processor. This processor is not only logically but also physically (on the chip) divided into four functional units: a microprogrammable instruction unit, an execution unit in slice technique, a fully associative cache memory and an I/O unit. For the ALU of the execution unit, circuits in PLA and slice techniques have been realized. On the basis of regularity, area consumption and achievable performance, the slice technique has been preferred. The designs utilize self-testing circuitry. (orig.) [de

  13. Design of 10Gbps optical encoder/decoder structure for FE-OCDMA system using SOA and opto-VLSI processors.

    Science.gov (United States)

    Aljada, Muhsen; Hwang, Seow; Alameh, Kamal

    2008-01-21

    In this paper we propose and experimentally demonstrate a reconfigurable 10 Gbps frequency-encoded (1D) encoder/decoder structure for optical code division multiple access (OCDMA). The encoder is constructed using a single semiconductor optical amplifier (SOA) and a 1D reflective Opto-VLSI processor. The SOA generates broadband amplified spontaneous emission that is dynamically sliced using digital phase holograms loaded onto the Opto-VLSI processor to generate 1D codewords. The selected wavelengths are injected back into the same SOA for amplification. The decoder is constructed using a single Opto-VLSI processor only. The encoded signal can successfully be retrieved at the decoder side only when the digital phase holograms of the encoder and the decoder are matched. The system performance is measured in terms of the auto-correlation and cross-correlation functions as well as the eye diagram.

  14. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    Science.gov (United States)

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited processing capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals, and shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.
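
    The serial selection mechanism described above can be sketched in a few lines of software. The following is a minimal illustration only (the chip itself is an analog, spike-based circuit); the map size, number of shifts and inhibition factor are assumptions chosen for the example, not parameters of the device.

      import numpy as np

      def selective_attention(saliency, n_shifts=4, inhibition=0.2):
          """Serially select the most salient location, then suppress it
          (inhibition of return) so attention shifts to the next region."""
          s = saliency.astype(float).copy()
          attended = []
          for _ in range(n_shifts):
              winner = np.unravel_index(np.argmax(s), s.shape)  # winner-take-all
              attended.append(winner)
              s[winner] *= inhibition                           # suppress the attended input
          return attended

      # Example: an 8x8 saliency map standing in for pre-processed sensory input.
      rng = np.random.default_rng(0)
      print(selective_attention(rng.random((8, 8))))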

  15. Assimilation of Biophysical Neuronal Dynamics in Neuromorphic VLSI.

    Science.gov (United States)

    Wang, Jun; Breen, Daniel; Akinin, Abraham; Broccard, Frederic; Abarbanel, Henry D I; Cauwenberghs, Gert

    2017-12-01

    Representing the biophysics of neuronal dynamics and behavior offers a principled analysis-by-synthesis approach toward understanding mechanisms of nervous system functions. We report on a set of procedures assimilating and emulating neurobiological data on a neuromorphic very large scale integrated (VLSI) circuit. The analog VLSI chip, NeuroDyn, features 384 digitally programmable parameters specifying the dynamics of 4 generalized Hodgkin-Huxley neurons coupled through 12 conductance-based chemical synapses. The parameters also describe reversal potentials, maximal conductances, and spline-regressed kinetic functions for the ion channel gating variables. In one set of experiments, we assimilated membrane potential recorded from one of the neurons on the chip to the model structure upon which NeuroDyn was designed, using the known current input sequence. We arrived at the programmed parameters except for model errors due to analog imperfections in the chip fabrication. In a related set of experiments, we replicated songbird individual neuron dynamics on NeuroDyn by estimating and configuring parameters extracted using data assimilation from intracellular neural recordings. Faithful emulation of detailed biophysical neural dynamics will enable the use of NeuroDyn as a tool to probe electrical and molecular properties of functional neural circuits. Neuroscience applications include studying the relationship between molecular properties of neurons and the emergence of different spike patterns or different brain behaviors. Clinical applications include studying and predicting effects of neuromodulators or neurodegenerative diseases on ion channel kinetics.

  16. Power gating of VLSI circuits using MEMS switches in low power applications

    KAUST Repository

    Shobak, Hosam; Ghoneim, Mohamed T.; El Boghdady, Nawal; Halawa, Sarah; Iskander, Sophinese M.; Anis, Mohab H.

    2011-01-01

    -designed MEMS switch to power gate VLSI circuits, such that leakage power is efficiently reduced while accounting for performance and reliability. The designed MEMS switch is characterized by a 0.1876 Ω ON resistance and requires 4.5 V to switch. As a result

  17. Drift chamber tracking with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-10-01

    We have tested a commercial analog VLSI neural network chip for finding in real time the intercept and slope of charged particles traversing a drift chamber. Voltages proportional to the drift times were input to the Intel ETANN chip and the outputs were recorded and later compared offline to conventional track fits. We will discuss the chamber and test setup, the chip specifications, and results of recent tests. We will also briefly discuss possible applications in high energy physics detector triggers.

  18. High-energy heavy ion testing of VLSI devices for single event ...

    Indian Academy of Sciences (India)

    Unknown

    The paper describes the high-energy heavy ion radiation testing of VLSI devices for single event upset (SEU) ... The experimental setup employed to produce low flux of heavy ions viz. silicon ... through which they pass, leaving behind a wake of elec- ... for use in Bus Management Unit (BMU) and bulk CMOS ... was scheduled.

  19. FILTRES: a 128 channels VLSI mixed front-end readout electronic development for microstrip detectors

    International Nuclear Information System (INIS)

    Anstotz, F.; Hu, Y.; Michel, J.; Sohler, J.L.; Lachartre, D.

    1998-01-01

    We present a VLSI digital-analog readout electronic chain for silicon microstrip detectors. The characteristics of this circuit have been optimized for the high resolution tracker of the CERN CMS experiment. This chip consists of 128 channels at 50 μm pitch. Each channel is composed of a charge amplifier, a CR-RC shaper, an analog memory, an analog processor, and an output FIFO read out serially by a multiplexer. This chip has been processed in the radiation hard technology DMILL. This paper describes the architecture of the circuit and presents test results of the 128 channel full chain chip. (orig.)

  20. The AMchip: A VLSI associative memory for track finding

    International Nuclear Information System (INIS)

    Morsani, F.; Galeotti, S.; Passuello, D.; Amendolia, S.R.; Ristori, L.; Turini, N.

    1992-01-01

    An associative memory to be used for super-fast track finding in future high energy physics experiments has been implemented on silicon as a full-custom CMOS VLSI chip (the AMchip). The first prototype has been designed and successfully tested at INFN in Pisa. It is implemented in 1.6 μm, double metal, silicon gate CMOS technology and contains about 140 000 MOS transistors on a 1×1 cm² silicon chip. (orig.)

  1. Point DCT VLSI Architecture for Emerging HEVC Standard

    OpenAIRE

    Ahmed, Ashfaq; Shahid, Muhammad Usman; Rehman, Ata ur

    2012-01-01

    This work presents a flexible VLSI architecture to compute the N-point DCT. Since HEVC supports different block sizes for the computation of the DCT, that is, 4 × 4 up to 32 × 32, the design of a flexible architecture to support them helps reduce the area overhead of hardware implementations. The hardware proposed in this work is partially folded to save area and to achieve the speed required for large video sequence sizes. The proposed architecture relies on the decomposition of the DCT matrices into ...
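
    As a hedged software reference for the transform sizes mentioned above (it models the mathematics only, not the folded hardware datapath of the paper), the N-point DCT-II matrix can be built explicitly and applied to square blocks from 4 × 4 up to 32 × 32:

      import numpy as np

      def dct_matrix(n):
          """Orthonormal N-point DCT-II matrix (the transform HEVC approximates with integer matrices)."""
          k = np.arange(n).reshape(-1, 1)
          i = np.arange(n).reshape(1, -1)
          c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
          c[0, :] = np.sqrt(1.0 / n)
          return c

      def dct_2d(block):
          """Separable 2-D DCT of an N x N block: C @ X @ C.T."""
          c = dct_matrix(block.shape[0])
          return c @ block @ c.T

      for n in (4, 8, 16, 32):                       # the block sizes HEVC supports
          c = dct_matrix(n)
          print(n, np.allclose(c @ c.T, np.eye(n)))  # orthonormal, so the inverse is simply C.T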

  2. VLSI architecture and design for the Fermat Number Transform implementation

    Energy Technology Data Exchange (ETDEWEB)

    Pajayakrit, A.

    1987-01-01

    A new technique of sectioning a pipelined transformer, using the Fermat Number Transform (FNT), is introduced. Also, a novel VLSI design which overcomes the problems of implementing FNTs for use in fast convolution/correlation is described. The design comprises one complete section of a pipelined transformer and may be programmed to function at any point in a forward or inverse pipeline, so allowing the construction of a pipelined convolver or correlator using identical chips; thus the favorable properties of the transform can be exploited. This overcomes the difficulty of fitting a complete pipeline onto one chip without resorting to the use of several different designs. A high-speed convolver/correlator using the VLSI chips has been successfully developed and tested. For impulse response lengths of up to 16 points, sampling rates of 0.5 MHz can be achieved. Finally, the filter speed performance using the FNT chips is compared to other designs and conclusions are drawn on the merits of the FNT for this application. Also, the advantages and limitations of the FNT are analyzed with respect to the more conventional FFT, and the results are provided.
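
    For readers unfamiliar with the transform itself, a small software sketch is given below: a 16-point number-theoretic transform over the Fermat prime 257, in which the root of unity is a power of 2 and circular convolution becomes pointwise multiplication. The modulus and length are illustrative choices, not the parameters of the chips described above.

      P = 257        # Fermat prime F3 = 2**8 + 1
      ALPHA = 2      # 2 has multiplicative order 16 mod 257, giving a 16-point transform
      N = 16

      def fnt(x, root):
          """Naive N-point Fermat Number Transform (all arithmetic modulo P)."""
          return [sum(x[i] * pow(root, i * k, P) for i in range(N)) % P for k in range(N)]

      def ifnt(X):
          inv_n = pow(N, P - 2, P)              # N^-1 mod P via Fermat's little theorem
          inv_root = pow(ALPHA, P - 2, P)
          return [(inv_n * v) % P for v in fnt(X, inv_root)]

      def circular_convolution(a, b):
          """Exact integer circular convolution via pointwise multiplication in the FNT domain."""
          A, B = fnt(a, ALPHA), fnt(b, ALPHA)
          return ifnt([(u * v) % P for u, v in zip(A, B)])

      a = [1, 2, 3, 4] + [0] * 12
      b = [5, 6, 7, 8] + [0] * 12
      print(circular_convolution(a, b))  # matches direct convolution while results stay below 257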

  3. Using Software Technology to Specify Abstract Interfaces in VLSI Design.

    Science.gov (United States)

    1985-01-01

    with the complexity levels inherent in VLSI design, in that they can capitalize on their foundations in discrete mathematics and the theory of...basis, rather than globally. Such a partitioning of module semantics makes the specification easier to construct and verify intellectually; it also...access function definitions. A standard language improves executability characteristics by capitalizing on portable, optimized system software developed

  4. Oxide nano-rod array structure via a simple metallurgical process

    International Nuclear Information System (INIS)

    Nanko, M; Do, D T M

    2011-01-01

    A simple method for fabricating an oxide nano-rod array structure via a metallurgical process is reported. Some dilute alloys, such as Ni(Al) solid solution, show internal oxidation with rod-like oxide precipitates during high-temperature oxidation at low oxygen partial pressure. By removing the metal part in the internal oxidation zone, an oxide nano-rod array structure can be developed on the surface of metallic components. In this report, Al2O3 or NiAl2O4 nano-rod array structures were prepared by using Ni(Al) solid solution. Effects of Cr addition into the Ni(Al) solid solution on internal oxidation are also reported. A pack cementation process for aluminizing the Ni surface was applied to prepare nano-rod array components with a desired shape. Near-net-shape Ni components with an oxide nano-rod array structure on their surface can be prepared by using the pack cementation process and internal oxidation.

  5. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross-spectral matrix subtraction. The test was conducted in the free jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.

  6. An SEU analysis approach for error propagation in digital VLSI CMOS ASICs

    International Nuclear Information System (INIS)

    Baze, M.P.; Bartholet, W.G.; Dao, T.A.; Buchner, S.

    1995-01-01

    A critical issue in the development of ASIC designs is the ability to achieve first-pass fabrication success. Unsuccessful fabrication runs have a serious impact on ASIC costs and schedules. The ability to predict an ASIC's radiation response prior to fabrication is therefore a key issue when designing ASICs for military and aerospace systems. This paper describes an analysis approach for calculating static bit error propagation in synchronous VLSI CMOS circuits, developed as an aid for predicting the SEU response of ASICs. The technique is intended for eventual application as an ASIC development simulation tool which can be used by circuit design engineers for performance evaluation during the pre-fabrication design process, in much the same way that logic and timing simulators are used.

  7. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its application to a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  8. VLSI Technology for Cognitive Radio

    Science.gov (United States)

    VIJAYALAKSHMI, B.; SIDDAIAH, P.

    2017-08-01

    One of the most challenging tasks in cognitive radio is efficient spectrum sensing to overcome the spectrum scarcity problem. The popular and widely used spectrum sensing technique is the energy detection scheme, as it is very simple and does not require any prior information about the signal. We propose one such approach: an optimised spectrum sensing scheme with a reduced filter structure. The optimisation is done in terms of the area and power performance of the spectrum sensing circuit. The VLSI structure of the optimised flexible spectrum sensing scheme is simulated in Verilog using the Xilinx ISE software. Our method achieves a 13% reduction in area and a 66% reduction in power consumption in comparison to the flexible spectrum sensing scheme. All the results are tabulated and comparisons are made. Our model thus opens up a new scheme for optimised and effective spectrum sensing.
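
    The underlying energy detection decision can be summarized with a short, hedged software sketch (the paper's contribution, the optimised VLSI filter structure, is not modelled here): accumulate the energy of N samples and compare it with a threshold set from the noise variance and a target false-alarm probability.

      import numpy as np
      from scipy.stats import norm

      def energy_detect(samples, noise_var, p_fa=0.01):
          """Declare the band occupied if the measured energy exceeds a threshold derived
          from the noise variance and the desired false-alarm rate (Gaussian approximation)."""
          n = len(samples)
          energy = np.sum(np.abs(samples) ** 2)
          threshold = noise_var * (n + np.sqrt(2.0 * n) * norm.isf(p_fa))
          return energy > threshold

      rng = np.random.default_rng(1)
      noise = rng.normal(0.0, 1.0, 1024)
      signal = noise + np.sin(2 * np.pi * 0.1 * np.arange(1024))    # primary user present
      print(energy_detect(noise, 1.0), energy_detect(signal, 1.0))  # expected: False, True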

  9. VLSI Implementation of a Fixed-Complexity Soft-Output MIMO Detector for High-Speed Wireless

    Directory of Open Access Journals (Sweden)

    Di Wu

    2010-01-01

    This paper presents a low-complexity MIMO symbol detector with close-to-maximum a posteriori performance for the emerging multiantenna-enhanced high-speed wireless communications. The VLSI implementation is based on a novel MIMO detection algorithm called Modified Fixed-Complexity Soft-Output (MFCSO) detection, which achieves a good trade-off between performance and implementation cost compared to the referenced prior art. By including a microcode-controlled channel preprocessing unit and a pipelined detection unit, it is flexible enough to cover several different standards and transmission schemes. The flexibility allows adaptive detection to minimize power consumption without degradation in throughput. The VLSI implementation of the detector is presented to show that real-time MIMO symbol detection of 20 MHz bandwidth 3GPP LTE and 10 MHz WiMAX downlink physical channels is achievable at reasonable silicon cost.

  10. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    The nested wire-array is a kind of promising structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on the zero-dimensional model, the assumption of wire-array collision, and the criterion of an optimized load (maximal load kinetic energy), an optimization of the typical nested wire-array as a load of the Z machine at Sandia Laboratory was carried out. It was shown that this load has been basically optimized. The Z-pinch process of the typical load was numerically studied by means of a one-dimensional three-temperature radiation magneto-hydrodynamics (RMHD) code. The obtained results reproduce the dynamic process of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer process of the drive current between the inner and outer arrays. The experimental and computational X-ray pulses were compared, and it is suggested that the assumption of wire-array collision is reasonable for nested wire-array Z-pinches, at least at the current level of the Z machine. (authors)

  11. A multi coding technique to reduce transition activity in VLSI circuits

    International Nuclear Information System (INIS)

    Vithyalakshmi, N.; Rajaram, M.

    2014-01-01

    Advances in VLSI technology have enabled the implementation of complex digital circuits in a single chip, reducing system size and power consumption. In deep submicron low power CMOS VLSI design, the main cause of energy dissipation is the charging and discharging of internal node capacitances due to transition activity. Transition activity is one of the major factors that affect dynamic power dissipation. This paper analyzes power reduction at the algorithm and logic circuit levels. At the algorithm level, the key to reducing power dissipation is minimizing transition activity, which is achieved by introducing a data coding technique. A novel multi-coding technique is therefore introduced that reduces transition activity on the bus lines by up to 52.3%, which automatically reduces the dynamic power dissipation. In addition, 1-bit full adders are introduced in the Hamming distance estimator block, which reduces the device count. This coding method is implemented using Verilog HDL. The overall performance is analyzed using ModelSim and Xilinx tools. In total, a 38.2% power saving is achieved compared to other existing methods. (semiconductor technology)
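
    The multi-coding scheme itself is not detailed in the abstract, so the sketch below illustrates the general principle with the classic bus-invert code instead: successive bus words are re-encoded (inverted, plus one extra flag line) whenever that lowers the Hamming distance to the previous value, reducing transition activity. It is an illustration of the idea, not the authors' technique.

      def hamming(a, b, width=8):
          """Number of bus lines that toggle between two words."""
          return bin((a ^ b) & ((1 << width) - 1)).count("1")

      def bus_invert_encode(words, width=8):
          """Classic bus-invert coding: transmit the inverted word plus an 'invert' flag
          whenever more than half of the lines would otherwise toggle."""
          prev, out = 0, []
          for w in words:
              if hamming(prev, w, width) > width // 2:
                  w ^= (1 << width) - 1        # invert the data lines
                  out.append((w, 1))
              else:
                  out.append((w, 0))
              prev = w
          return out

      data = [0x00, 0xFF, 0x0F, 0xF0, 0xAA]
      encoded = bus_invert_encode(data)
      raw = sum(hamming(a, b) for a, b in zip([0] + data, data))
      coded = sum(hamming(a, b) + (fa != fb)   # count data lines plus the invert line
                  for (a, fa), (b, fb) in zip([(0, 0)] + encoded, encoded))
      print(raw, "transitions uncoded vs", coded, "coded")   # 24 vs 12 for this example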

  12. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    International Nuclear Information System (INIS)

    Jian Haifang; Shi Yin

    2009-01-01

    The K-best detector is considered a promising technique for MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on a novel expansion scheme which can predetermine the branches' ascending order by their local distances. A distributed sorter then sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture reduces fundamental operations by approximately 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and consequently lowers the demand on hardware resources significantly. Simulation results prove that the proposed architecture can achieve performance very similar to conventional K-best detectors. Hence, it is an efficient solution for the VLSI implementation of the K-best detector in high-throughput MIMO-OFDM systems.
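
    For context, the breadth-first search that a K-best detector performs can be sketched in software as below. This is a real-valued toy model of the generic algorithm; the paper's specific MCU expansion ordering and distributed pipelined sorter are not reproduced, and the channel, constellation and K value are assumptions made for the example.

      import numpy as np

      def k_best_detect(y, R, constellation, K=4):
          """Breadth-first K-best search on an upper-triangular system y = R s + n:
          expand every surviving path over the constellation and keep the K paths
          with the smallest partial Euclidean distances at each tree level."""
          n = R.shape[0]
          paths = [([], 0.0)]                       # (partial symbol vector, accumulated metric)
          for level in range(n - 1, -1, -1):        # detect the last symbol first
              candidates = []
              for syms, metric in paths:
                  for s in constellation:
                      trial = [s] + syms            # trial[j] is the symbol at row level + j
                      interference = sum(R[level, level + 1 + j] * trial[1 + j]
                                         for j in range(len(syms)))
                      pd = abs(y[level] - R[level, level] * s - interference) ** 2
                      candidates.append((trial, metric + pd))
              paths = sorted(candidates, key=lambda c: c[1])[:K]   # keep the K best survivors
          return paths[0]

      # Toy example: 2x2 real channel after QR decomposition, BPSK-like constellation.
      R = np.array([[1.0, 0.4], [0.0, 0.9]])
      s_true = np.array([1.0, -1.0])
      y = R @ s_true + 0.05 * np.random.default_rng(3).normal(size=2)
      print(k_best_detect(y, R, (-1.0, 1.0), K=2))   # recovers [1.0, -1.0]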

  13. VLSI architecture of a K-best detector for MIMO-OFDM wireless communication systems

    Energy Technology Data Exchange (ETDEWEB)

    Jian Haifang; Shi Yin, E-mail: jhf@semi.ac.c [Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083 (China)

    2009-07-15

    The K-best detector is considered a promising technique for MIMO-OFDM detection because of its good performance and low complexity. In this paper, a new K-best VLSI architecture is presented. In the proposed architecture, the metric computation units (MCUs) expand each surviving path only to its partial branches, based on a novel expansion scheme which can predetermine the branches' ascending order by their local distances. A distributed sorter then sorts out the new K surviving paths from the expanded branches in pipelines. Compared to the conventional K-best scheme, the proposed architecture reduces fundamental operations by approximately 50% and 75% for the 16-QAM and the 64-QAM cases, respectively, and consequently lowers the demand on hardware resources significantly. Simulation results prove that the proposed architecture can achieve performance very similar to conventional K-best detectors. Hence, it is an efficient solution for the VLSI implementation of the K-best detector in high-throughput MIMO-OFDM systems.

  14. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the constructed codes' performance is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  15. Array signal processing in the NASA Deep Space Network

    Science.gov (United States)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of the array system are described. Field measurements from actual spacecraft tracking are also presented.

  16. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable to both APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm² have been operated.

  17. A pipelined architecture for real time correction of non-uniformity in infrared focal plane arrays imaging system using multiprocessors

    Science.gov (United States)

    Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan

    2010-07-01

    This paper proposes a pipelined circuit architecture, implemented in an FPGA (a very large scale integration (VLSI) circuit), which efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this imaging system. Each processor undertakes its own task, coordinating its work with the others'. The system on a programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. The adequate timing margin allows the FPGA to perform the NUC image pre-processing algorithm with ease, which offers a favorable guarantee for the post-processing work in the DSP. In addition, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. Thus, this architecture yields a multiprocessor image processing system and a smart solution that satisfies the system's performance requirements.
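
    The abstract does not spell out the NUC algorithm, so the sketch below shows the standard two-point gain/offset correction that IRFPA pre-processing pipelines of this kind commonly implement; it is a hedged illustration, not the paper's exact method, and the reference temperatures and array size are invented for the example.

      import numpy as np

      def two_point_nuc(raw, low_ref, high_ref, t_low, t_high):
          """Two-point non-uniformity correction: per-pixel gain and offset derived
          from flat-field frames recorded at two reference temperatures."""
          gain = (t_high - t_low) / (high_ref - low_ref)
          offset = t_low - gain * low_ref
          return gain * raw + offset

      rng = np.random.default_rng(0)
      true_gain = rng.normal(1.0, 0.05, (4, 4))     # per-pixel responsivity spread
      true_off = rng.normal(0.0, 2.0, (4, 4))       # per-pixel offset spread
      scene = 25.0
      raw = true_gain * scene + true_off            # uncorrected frame of a uniform 25-degree scene
      low_ref = true_gain * 10.0 + true_off         # flat field at 10 degrees
      high_ref = true_gain * 40.0 + true_off        # flat field at 40 degrees
      corrected = two_point_nuc(raw, low_ref, high_ref, 10.0, 40.0)
      print(np.allclose(corrected, scene))          # True: fixed-pattern noise removed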

  18. DPL/Daedalus design environment (for VLSI)

    Energy Technology Data Exchange (ETDEWEB)

    Batali, J; Mayle, N; Shrobe, H; Sussman, G; Weise, D

    1981-01-01

    The DPL/Daedalus design environment is an interactive VLSI design system implemented at the MIT Artificial Intelligence Laboratory. The system consists of several components: a layout language called DPL (for design procedure language); an interactive graphics facility (Daedalus); and several special purpose design procedures for constructing complex artifacts such as PLAs and microprocessor data paths. Coordinating all of these is a generalized property list data base which contains both the data representing circuits and the procedures for constructing them. The authors first review the nature of the data base and then turn to DPL and Daedalus, the two most common ways of entering information into the data base. The next two sections review the specialized procedures for constructing PLAs and data paths; the final section describes a tool for hierarchical node extraction. 5 references.

  19. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  20. An area-efficient path memory structure for VLSI Implementation of high speed Viterbi decoders

    DEFF Research Database (Denmark)

    Paaske, Erik; Pedersen, Steen; Sparsø, Jens

    1991-01-01

    Path storage and selection methods for Viterbi decoders are investigated with special emphasis on VLSI implementations. Two well-known algorithms, the register exchange algorithm, REA, and the trace back algorithm, TBA, are considered. The REA requires the smallest number of storage elements...

  1. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of the investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing are summarized. Subjects that are covered include: (1) a new iterative Fourier-transform-based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the solution of the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.
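
    Item (2) above can be made concrete with a purely numerical sketch that solves for a Wiener-type weight vector with an ordinary digital solver (the paper's point is doing this on optical linear algebra processors, which is not modelled here); the array size, angles and snapshot counts are assumptions for the example.

      import numpy as np

      def steering_vector(n, theta, d=0.5):
          """Uniform linear array response for a plane wave from angle theta (radians),
          element spacing d in wavelengths."""
          return np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))

      n = 8
      a_d = steering_vector(n, np.deg2rad(0.0))      # desired look direction
      a_j = steering_vector(n, np.deg2rad(35.0))     # jammer direction

      # Sample covariance of jammer-plus-noise snapshots.
      rng = np.random.default_rng(2)
      snaps = (np.outer(a_j, 10.0 * rng.normal(size=200)) +
               (rng.normal(size=(n, 200)) + 1j * rng.normal(size=(n, 200))) / np.sqrt(2))
      R = snaps @ snaps.conj().T / 200

      w = np.linalg.solve(R, a_d)                    # Wiener/MVDR-type weights: w proportional to R^-1 a_d
      w /= a_d.conj() @ w                            # normalize for unit gain toward the desired direction

      gain = lambda a: abs(w.conj() @ a)
      print("desired-direction gain:", round(gain(a_d), 3), " jammer-direction gain:", round(gain(a_j), 4))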

  2. First results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Anzivino, G.; Horisberger, R.; Hubbeling, L.; Hyams, B.; Parker, S.; Breakstone, A.; Litke, A.M.; Walker, J.T.; Bingefors, N.

    1986-01-01

    A 256-strip silicon detector with 25 μm strip pitch, connected to two 128-channel NMOS VLSI chips (Microplex), has been tested using straight-through tracks from a ruthenium beta source. The readout channels have a pitch of 47.5 μm. A single multiplexed output provides voltages proportional to the integrated charge from each strip. The most probable signal height from the beta traversals is approximately 14 times the rms noise in any single channel. (orig.)

  3. Monolithic active pixel sensors (MAPS) in a VLSI CMOS technology

    CERN Document Server

    Turchetta, R; Manolopoulos, S; Tyndel, M; Allport, P P; Bates, R; O'Shea, V; Hall, G; Raymond, M

    2003-01-01

    Monolithic Active Pixel Sensors (MAPS) designed in a standard VLSI CMOS technology have recently been proposed as a compact pixel detector for the detection of high-energy charged particles in vertex/tracking applications. MAPS, also named CMOS sensors, are already extensively used in visible light applications. With respect to other competing imaging technologies, CMOS sensors have several potential advantages in terms of low cost, low power, lower noise at higher speed, random access of pixels (which allows windowing of regions of interest), and the ability to integrate several functions on the same chip. Altogether this leads to the concept of a 'camera-on-a-chip'. In this paper, we review the use of CMOS sensors for particle physics and we analyse their performance in terms of efficiency (fill factor), signal generation, noise, readout speed and sensor area. In most high-energy physics applications, data reduction is needed in the sensor at an early stage of the data processing before transfer of the data to ta...

  4. VLSI System Implementation of 200 MHz, 8-bit, 90nm CMOS Arithmetic and Logic Unit (ALU) Processor Controller

    Directory of Open Access Journals (Sweden)

    Fazal NOORBASHA

    2012-08-01

    This study presents the Very Large Scale Integration (VLSI) system implementation of a 200 MHz, 8-bit, 90 nm Complementary Metal Oxide Semiconductor (CMOS) Arithmetic and Logic Unit (ALU) processor controller with a logic-gate design style and 0.12 µm six-metal 90 nm CMOS fabrication technology. The system blocks and the behaviour are defined and the logical design is implemented at the gate level in the design phase. Then, the logic circuits are simulated and the subunits are converted into 90 nm CMOS layout. Finally, in order to construct the VLSI system, these units are placed in the floor plan and simulated with analog and digital, logic- and switch-level simulators. The results of the simulations indicate that the VLSI system can execute different instructions, which can be divided into subgroups: transfer instructions, arithmetic and logic instructions, rotate and shift instructions, branch instructions, input/output instructions, and control instructions. The data bus of the system is 16-bit. It runs at 200 MHz, and the operating voltage is 1.2 V. In this paper, the parametric analysis of the system, the design steps and the obtained results are explained.

  5. High performance VLSI telemetry data systems

    Science.gov (United States)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

    NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems, well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements over the last five years has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end-user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data system needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project-specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board-level functional component, to integrated telemetry data system.

  6. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    Science.gov (United States)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division, frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also to limitations on the achievable count rate. Thus, a read-out module that is capable of processing microcalorimeter signals in real time is highly desirable. We have developed multichannel digital signal processing electronics that are capable of on-board, real-time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core plus this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can easily expand to future detector systems but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We also introduce a time-division multiplexing type daughter board, which takes in time-division multiplexed signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.

  7. Generic nano-imprint process for fabrication of nanowire arrays

    Energy Technology Data Exchange (ETDEWEB)

    Pierret, Aurelie; Hocevar, Moira; Algra, Rienk E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M [Philips Research Laboratories Eindhoven, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Diedenhofen, Silke L [FOM Institute for Atomic and Molecular Physics c/o Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Vlieg, E, E-mail: e.p.a.m.bakkers@tue.nl [IMM, Solid State Chemistry, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning of the samples before growth with piranha solution, in combination with a thermal anneal at 550 °C for InP and 700 °C for GaP, results in uniform nanowire arrays with 1% variation in nanowire length and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  8. Generic nano-imprint process for fabrication of nanowire arrays

    NARCIS (Netherlands)

    Pierret, A.; Hocevar, M.; Diedenhofen, S.L.; Algra, R.E.; Vlieg, E.; Timmering, E.C.; Verschuuren, M.A.; Immink, W.G.G.; Verheijen, M.A.; Bakkers, E.P.A.M.

    2010-01-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of

  9. An area-efficient topology for VLSI implementation of Viterbi decoders and other shuffle-exchange type structures

    DEFF Research Database (Denmark)

    Sparsø, Jens; Jørgensen, Henrik Nordtorp; Paaske, Erik

    1991-01-01

    A topology for single-chip implementation of computing structures based on shuffle-exchange (SE)-type interconnection networks is presented. The topology is suited for structures with a small number of processing elements (i.e. 32-128) whose area cannot be neglected compared to the area required.... The topology has been used in a VLSI implementation of the add-compare-select (ACS) module of a fully parallel K=7, R=1/2 Viterbi decoder. Both the floor-planning issues and some of the important algorithm and circuit-level aspects of this design are discussed. The chip has been designed and fabricated in a 2.... The interconnection network occupies 32% of the area. ...

  10. Multi-net optimization of VLSI interconnect

    CERN Document Server

    Moiseev, Konstantin; Wimer, Shmuel

    2015-01-01

    This book covers layout design and layout migration methodologies for optimizing multi-net wire structures in advanced VLSI interconnects. Scaling-dependent models for interconnect power, interconnect delay and crosstalk noise are covered in depth, and several design optimization problems are addressed, such as minimization of interconnect power under delay constraints, or design for minimal delay in wire bundles within a given routing area. A handy reference or a guide for design methodologies and layout automation techniques, this book provides a foundation for physical design challenges of interconnect in advanced integrated circuits.  • Describes the evolution of interconnect scaling and provides new techniques for layout migration and optimization, focusing on multi-net optimization; • Presents research results that provide a level of design optimization which does not exist in commercially-available design automation software tools; • Includes mathematical properties and conditions for optimal...

  11. Physico-topological methods of increasing stability of the VLSI circuit components to irradiation. Fiziko-topologhicheskie sposoby uluchsheniya radiatsionnoj stojkosti komponentov BIS

    Energy Technology Data Exchange (ETDEWEB)

    Pereshenkov, V S [MIFI, Moscow, (Russian Federation); Shishianu, F S; Rusanovskij, V I [S. Lazo KPI, Chisinau, (Moldova, Republic of)

    1992-01-01

    The paper presents the method used and the experimental results obtained for an 8-bit microprocessor irradiated with γ-rays and neutrons. The correlation of the electrical and technological parameters with the irradiation parameters is revealed. The influence of leakage current between devices incorporated in VLSI circuits was studied. The obtained results make it possible to determine the technological parameters necessary for designing circuits able to work at predetermined doses. The substrate doping concentration necessary for isolation, which eliminates the leakage current between devices and prevents VLSI circuit breakdown, was determined. (Author).

  12. International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking

    CERN Document Server

    Shirur, Yasha; Prasad, Rekha

    2013-01-01

    This book is a collection of papers presented by renowned researchers, keynote speakers and academicians in the International Conference on VLSI, Communication, Analog Designs, Signals and Systems, and Networking (VCASAN-2013), organized by B.N.M. Institute of Technology, Bangalore, India during July 17-19, 2013. The book provides global trends in cutting-edge technologies in electronics and communication engineering. The content of the book is useful to engineers, researchers and academicians as well as industry professionals.

  13. Diode temperature sensor array for measuring and controlling micro scale surface temperature

    International Nuclear Information System (INIS)

    Han, Il Young; Kim, Sung Jin

    2004-01-01

    The need for micro-scale thermal detection techniques is increasing in biology and the chemical industry; examples include thermal fingerprinting, micro PCR (Polymerase Chain Reaction), TAS and so on. To satisfy these needs, we developed a DTSA (Diode Temperature Sensor Array) for detecting and controlling the temperature on a small surface. The DTSA is fabricated using VLSI techniques. It consists of a 32 × 32 array of diodes (1,024 diodes) for temperature detection and 8 heaters for temperature control on an 8 mm surface area. The working principle of temperature detection is that the forward voltage drop across a silicon diode is approximately proportional to the inverse of the absolute temperature of the diode. Eight heaters (1K) made of poly-silicon are added onto the silicon wafer and controlled individually to maintain a uniform temperature distribution across the DTSA. Flip-chip packaging is used for easy connection of the DTSA. The circuitry for scanning and controlling the DTSA has also been developed
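
    A hedged sketch of the read-out arithmetic implied above: at constant bias current, the forward voltage of a silicon diode falls nearly linearly with temperature (roughly -2 mV per degree C), so each diode's voltage can be mapped to temperature through a simple two-point calibration. The voltages and temperatures below are illustrative numbers, not measurements from the DTSA.

      import numpy as np

      def calibrate(v1, t1, v2, t2):
          """Per-diode linear calibration V_f = a*T + b from two known temperatures."""
          a = (v2 - v1) / (t2 - t1)
          b = v1 - a * t1
          return a, b

      def voltage_to_temp(v, a, b):
          return (v - b) / a

      # Illustrative silicon-diode behaviour: ~0.65 V at 25 C, dropping ~2 mV per degree C.
      a, b = calibrate(0.650, 25.0, 0.550, 75.0)
      readings = np.array([0.648, 0.630, 0.600])             # measured forward voltages (V)
      print(np.round(voltage_to_temp(readings, a, b), 1))    # -> [26. 35. 50.] degrees C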

  14. Sampling phased array a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to the wide application of phased array systems. A new phased array technique, called the "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...

  15. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation, submitted to the Swiss Federal Institute of Technology (ETH) in Zurich, examines the analysis of surface wave properties, which allows geophysicists to gain insight into the structure of the subsoil and thus avoid more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  16. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation, submitted to the Swiss Federal Institute of Technology (ETH) in Zurich, examines the analysis of surface wave properties, which allows geophysicists to gain insight into the structure of the subsoil and thus avoid more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.

  17. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and allows flexible parameter adjustment for various experimental conditions. (authors)

  18. Adaptive WTA with an analog VLSI neuromorphic learning chip.

    Science.gov (United States)

    Häfliger, Philipp

    2007-03-01

    In this paper, we demonstrate how a particular spike-based learning rule (where exact temporal relations between input and output spikes of a spiking model neuron determine the changes of the synaptic weights) can be tuned to express rate-based classical Hebbian learning behavior (where the average input and output spike rates are sufficient to describe the synaptic changes). This shift in behavior is controlled by the input statistic and by a single time constant. The learning rule has been implemented in a neuromorphic very large scale integration (VLSI) chip as part of a neurally inspired spike signal image processing system. The latter is the result of the European Union research project Convolution AER Vision Architecture for Real-Time (CAVIAR). Since it is implemented as a spike-based learning rule (which is most convenient in the overall spike-based system), even if it is tuned to show rate behavior, no explicit long-term average signals are computed on the chip. We show the rule's rate-based Hebbian learning ability in a classification task in both simulation and chip experiment, first with artificial stimuli and then with sensor input from the CAVIAR system.
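
    The shift from spike-timing behavior to rate-based Hebbian behavior can be illustrated with a small simulation: with Poisson pre- and postsynaptic spike trains and a pair-based update window, the accumulated weight change grows roughly with the product of the average rates, which is the classical Hebbian picture. The exponential kernel, amplitude and time constant below are assumptions for the illustration, not the chip's actual learning rule.

      import numpy as np

      rng = np.random.default_rng(0)

      def poisson_train(rate_hz, duration_s, dt=1e-3):
          """Spike times (seconds) of a homogeneous Poisson process."""
          return np.nonzero(rng.random(int(duration_s / dt)) < rate_hz * dt)[0] * dt

      def pair_based_dw(pre, post, a_plus=1e-3, tau=0.02):
          """Weight change from pre/post spike pairs with an exponential window
          (illustrative pair-based rule, not the CAVIAR chip's rule)."""
          dw = 0.0
          for t_post in post:
              dt = t_post - pre[pre <= t_post]
              dw += a_plus * np.sum(np.exp(-dt / tau))   # potentiate for each preceding pre-spike
          return dw

      # The expected change scales roughly with (pre rate) x (post rate):
      for r_pre, r_post in [(10, 10), (20, 10), (20, 20)]:
          pre, post = poisson_train(r_pre, 20.0), poisson_train(r_post, 20.0)
          print(r_pre, r_post, round(pair_based_dw(pre, post), 4))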

  19. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  20. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    Science.gov (United States)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  1. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, biologically inspired on-focal-plane image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave IR/longwave IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very large scale integration (VLSI) integrated circuit which was developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by the application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  2. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever-increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
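
    The two pre-processing steps named above can be sketched as follows; the band-pass corner frequencies, sampling rate and threshold rule are common illustrative choices, not the tool's actual implementation, and the channel-parallel dispatch here simply uses one worker process per electrode.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt
      from concurrent.futures import ProcessPoolExecutor

      FS = 25_000                                            # sampling rate in Hz (illustrative)
      SOS = butter(4, [300, 3000], btype="bandpass", fs=FS, output="sos")

      def detect_spikes(channel):
          """Band-pass filter one electrode trace, then flag crossings of a threshold
          set at 5x the robust noise estimate (median absolute deviation / 0.6745)."""
          filtered = sosfiltfilt(SOS, channel)
          threshold = 5.0 * np.median(np.abs(filtered)) / 0.6745
          return np.nonzero(filtered < -threshold)[0]        # sample indices of candidate spikes

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          data = rng.normal(0.0, 10.0, (64, FS))             # 64 channels x 1 s of noise-only data
          with ProcessPoolExecutor() as pool:                # one channel per worker
              spikes = list(pool.map(detect_spikes, data))
          print(sum(len(s) for s in spikes), "threshold crossings found")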

  3. Assessment of Measurement Distortions in GNSS Antenna Array Space-Time Processing

    Directory of Open Access Journals (Sweden)

    Thyagaraja Marathe

    2016-01-01

    Antenna array processing techniques are studied in GNSS as effective tools to mitigate interference in the spatial and spatiotemporal domains. However, without specific considerations, the array processing results in biases and distortions in the cross-ambiguity function (CAF) of the ranging codes. In space-time processing (STP), the CAF misshaping can happen due to the combined effect of space-time processing and the unintentional signal attenuation by filtering. This paper focuses on characterizing these degradations for different controlled signal scenarios and for live data from an antenna array. The antenna array simulation method introduced in this paper enables one to perform accurate analyses in the field of STP. The effects of the relative placement of the interference source with respect to the desired signal direction are shown using overall measurement errors and the signal strength profile. Analyses of contributions from each source of distortion are conducted individually and collectively. Effects of distortions on GNSS pseudorange errors and position errors are compared for blind, semi-distortionless, and distortionless beamforming methods. The results from this characterization can be useful for designing low-distortion filters, which are especially important for high-accuracy GNSS applications in challenging environments.

  4. Development of a multitechnology FPGA: a reconfigurable architecture for photonic information processing

    Science.gov (United States)

    Mal, Prosenjit; Toshniwal, Kavita; Hawk, Chris; Bhadri, Prashant R.; Beyette, Fred R., Jr.

    2004-06-01

    Over the years, Field Programmable Gate Arrays (FPGAs) have made a profound impact on the electronics industry, with rapidly improving semiconductor manufacturing technology ranging from sub-micron to deep sub-micron processes and equally innovative CAD tools. Though the FPGA has revolutionized programmable/reconfigurable digital logic technology, one limitation of current FPGAs is that the user is limited to strictly electronic designs. Thus, they are not suitable for applications that are not purely electronic, such as optical communications, photonic information processing systems and other multi-technology applications (e.g. analog devices, MEMS devices and microwave components). Over recent years, the growing trend has been towards the incorporation of non-traditional device technologies into traditional CMOS VLSI systems. The integration of these technologies requires a new kind of FPGA that can merge conventional FPGA technology with photonic and other multi-technology devices. The proposed new class of field programmable device will extend the flexibility, rapid prototyping and reusability benefits associated with conventional electronics into the photonic and multi-technology domains and give rise to the development of a wider class of programmable and embedded integrated systems. This new technology will create a tremendous opportunity for applying conventional programmable/reconfigurable hardware concepts in other disciplines like photonic information processing. To substantiate this novel architectural concept, we have fabricated proof-of-concept CMOS VLSI Multi-technology FPGA (MT-FPGA) chips that include both digital field programmable logic blocks and threshold programmable photoreceivers which are suitable for sensing optical signals. Results from these chips strongly support the feasibility of this new optoelectronic device concept.

  5. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify, with 98.2% accuracy, the origins of 549 explosions conducted by closely spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES, many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.

  6. Design Implementation and Testing of a VLSI High Performance ASIC for Extracting the Phase of a Complex Signal

    National Research Council Canada - National Science Library

    Altmeyer, Ronald

    2002-01-01

    This thesis documents the research, circuit design, and simulation testing of a VLSI ASIC which extracts phase angle information from a complex sampled signal using the arctangent relationship φ = tan⁻¹(Q/I)...
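
    The relationship itself is easy to illustrate in software. The following is a minimal sketch, not the thesis's ASIC design, that recovers the phase of complex I/Q samples with a four-quadrant arctangent; the sample values are invented for illustration.

        import math

        def extract_phase(i_samples, q_samples):
            """Return the phase angle (radians) of each complex sample,
            using the four-quadrant arctangent of Q over I."""
            return [math.atan2(q, i) for i, q in zip(i_samples, q_samples)]

        # Example: samples of a complex exponential with a known phase ramp.
        phases = [k * math.pi / 8 for k in range(8)]
        i_samples = [math.cos(p) for p in phases]
        q_samples = [math.sin(p) for p in phases]

        recovered = extract_phase(i_samples, q_samples)
        for expected, got in zip(phases, recovered):
            assert abs(expected - got) < 1e-9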

  7. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of the complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented in the article. The software ‘IDealSystem3D’ by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. The use of correlation processing makes it possible to obtain a more accurate reconstruction of the reflector images and to increase the signal-to-noise ratio. The experimental results were processed using an original program, which allows the parameters of the antenna array and the sampling frequency to be varied.

  8. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    Science.gov (United States)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment the 34 and 70 meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, at western U.S., Australian, and European longitudes, each with 400 12 m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: provide a means to evaluate the performance of the Breadboard Array's antenna subsystem; design and build prototype hardware; demonstrate and evaluate proposed signal processing techniques; and gain experience with various technologies that may be used in the Large Array. Results are summarized.

  9. A multichip aVLSI system emulating orientation selectivity of primary visual cortical cells.

    Science.gov (United States)

    Shimonomura, Kazuhiro; Yagi, Tetsuya

    2005-07-01

    In this paper, we designed and fabricated a multichip neuromorphic analog very large scale integrated (aVLSI) system, which emulates the orientation selective response of the simple cell in the primary visual cortex. The system consists of a silicon retina and an orientation chip. An image, which is filtered by a concentric center-surround (CS) antagonistic receptive field of the silicon retina, is transferred to the orientation chip. The image transfer from the silicon retina to the orientation chip is carried out with analog signals. The orientation chip selectively aggregates multiple pixels of the silicon retina, mimicking the feedforward model proposed by Hubel and Wiesel. The chip provides the orientation-selective (OS) outputs which are tuned to 0 degrees, 60 degrees, and 120 degrees. The feed-forward aggregation reduces the fixed pattern noise that is due to the mismatch of the transistors in the orientation chip. The spatial properties of the orientation selective response were examined in terms of the adjustable parameters of the chip, i.e., the number of aggregated pixels and size of the receptive field of the silicon retina. The multichip aVLSI architecture used in the present study can be applied to implement higher order cells such as the complex cell of the primary visual cortex.

  10. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform are presented in this paper. This is accomplished by proposing a new memory efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering based motion detection architecture. The new memory efficient system robustly and automatically detects motion in real-world scenarios (both static and pseudo-stationary backgrounds) in real time for standard PAL (720 × 576) color video.

  11. A programmable systolic array correlator as a trigger processor for electron pairs in rich (ring image Cherenkov) counters

    Science.gov (United States)

    Männer, R.

    1989-12-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128 x 128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows the circles of close electron pairs to be rejected. The trigger decision is generated if a pseudo-adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8 x 8 PEs is being developed using the VENUS design system and will be produced in 2 μm CMOS technology.

  12. A programmable systolic array correlator as a trigger processor for electron pairs in RICH (ring image Cherenkov) counters

    International Nuclear Information System (INIS)

    Maenner, R.

    1989-01-01

    This paper describes a systolic array processor for a ring image Cherenkov counter which is capable of identifying pairs of electron circles with a known radius and a certain minimum distance within 15 μs. The processor is a very flexible and fast device. It consists of 128x128 processing elements (PEs), where one PE is assigned to each pixel of the image. All PEs run synchronously at 40 MHz. The identification of electron circles is done by correlating the detector image with the proper circle circumference. Circle centers are found by peak detection in the correlation result. A second correlation with a circle disc allows the circles of close electron pairs to be rejected. The trigger decision is generated if a pseudo-adder detects at least two remaining circles. The device is controlled by a freely programmable sequencer. A VLSI chip containing 8x8 PEs is being developed using the VENUS design system and will be produced in 2 μm CMOS technology. (orig.)

  13. Solution-processed single-wall carbon nanotube transistor arrays for wearable display backplanes

    Directory of Open Access Journals (Sweden)

    Byeong-Cheol Kang

    2018-01-01

    Full Text Available In this paper, we demonstrate solution-processed single-wall carbon nanotube thin-film transistor (SWCNT-TFT) arrays with polymeric gate dielectrics on polymeric substrates for wearable display backplanes, which can be directly attached to the human body. The optimized SWCNT-TFTs without any buffer layer on flexible substrates exhibit a linear field-effect mobility of 1.5 cm²/V·s and a threshold voltage of around 0 V. The statistical plot of the key device metrics extracted from 35 SWCNT-TFTs, which were fabricated in different batches at different times, conclusively supports that we successfully demonstrated high-performance solution-processed SWCNT-TFT arrays with the excellent uniformity in device performance that wearable backplanes demand. We also investigate the operational stability of wearable SWCNT-TFT arrays against an applied strain of up to 40%, which is essential for the harsh degrees of strain encountered on the human body. We believe that the demonstration of flexible SWCNT-TFT arrays, fabricated entirely by solution processing except for the deposition of the metal electrodes, at process temperatures below 130 °C, can open up new routes for wearable display backplanes.

  14. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    Science.gov (United States)

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Operation of a Fast-RICH Prototype with VLSI readout electronics

    Energy Technology Data Exchange (ETDEWEB)

    Guyonnet, J.L. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Arnold, R. (CRN, IN2P3-CNRS / Louis Pasteur Univ., Strasbourg (France)); Jobez, J.P. (Coll. de France, 75 - Paris (France)); Seguinot, J. (Coll. de France, 75 - Paris (France)); Ypsilantis, T. (Coll. de France, 75 - Paris (France)); Chesi, E. (CERN / ECP Div., Geneve (Switzerland)); Racz, A. (CERN / ECP Div., Geneve (Switzerland)); Egger, J. (Paul Scherrer Inst., Villigen (Switzerland)); Gabathuler, K. (Paul Scherrer Inst., Villigen (Switzerland)); Joram, C. (Karlsruhe Univ. (Germany)); Adachi, I. (KEK, Tsukuba (Japan)); Enomoto, R. (KEK, Tsukuba (Japan)); Sumiyoshi, T. (KEK, Tsukuba (Japan))

    1994-04-01

    We discuss the first test results, obtained with cosmic rays, of a full-scale Fast-RICH Prototype with proximity-focused 10 mm thick LiF (CaF₂) solid radiators, TEA as photosensor in CH₄, and readout of 12 × 10³ cathode pads (5.334 × 6.604 mm²) using dedicated VLSI electronics we have developed. The number of detected photoelectrons is 7.7 (6.9) for the CaF₂ (LiF) radiator, very near to the expected values 6.4 (7.5) from Monte Carlo simulations. The single-photon Cherenkov angle resolution σ_θ

  16. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design, and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.

  17. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

    We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows the vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with the surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.

  18. Fundamentals of spherical array processing

    CERN Document Server

    Rafaely, Boaz

    2015-01-01

    This book provides a comprehensive introduction to the theory and practice of spherical microphone arrays. It is written for graduate students, researchers and engineers who work with spherical microphone arrays in a wide range of applications.   The first two chapters provide the reader with the necessary mathematical and physical background, including an introduction to the spherical Fourier transform and the formulation of plane-wave sound fields in the spherical harmonic domain. The third chapter covers the theory of spatial sampling, employed when selecting the positions of microphones to sample sound pressure functions in space. Subsequent chapters present various spherical array configurations, including the popular rigid-sphere-based configuration. Beamforming (spatial filtering) in the spherical harmonics domain, including axis-symmetric beamforming, and the performance measures of directivity index and white noise gain are introduced, and a range of optimal beamformers for spherical arrays, includi...

  19. Carbon nanotube based VLSI interconnects analysis and design

    CERN Document Server

    Kaushik, Brajesh Kumar

    2015-01-01

    The brief primarily focuses on the performance analysis of CNT based interconnects in the current research scenario. Different CNT structures are modeled on the basis of transmission line theory. Performance comparison of different CNT structures illustrates that CNTs are more promising than Cu or other materials used in global VLSI interconnects. The brief is organized into five chapters which mainly discuss: (1) an overview of the current research scenario and the basics of interconnects; (2) the unique crystal structures and the basics of the physical properties of CNTs, and the production, purification and applications of CNTs; (3) a brief technical review, the geometry and equivalent RLC parameters for different single and bundled CNT structures; (4) a comparative analysis of crosstalk and delay for different single and bundled CNT structures; and (5) various unique mixed CNT bundle structures and their equivalent electrical models.

  20. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  1. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  2. Signal Processing for a Lunar Array: Minimizing Power Consumption

    Science.gov (United States)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study is: (1) a Lunar Radio Array for low frequency, high redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the Intergalactic Medium and the evolution of large scale structures; (4) does the current cosmological model accurately describe the Universe before reionization? The Lunar Radio Array is (1) a radio interferometer based on the far side of the moon, (1a) necessary for precision measurements, (1b) shielded from earth-based and solar RFI, (1c) with no permanent ionosphere; (2) a minimum collecting area of approximately 1 square km and brightness sensitivity of 10 mK; (3) several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  3. Wideband aperture array using RF channelizers and massively parallel digital 2D IIR filterbank

    Science.gov (United States)

    Sengupta, Arindam; Madanayake, Arjuna; Gómez-García, Roberto; Engeberg, Erik D.

    2014-05-01

    Wideband receive-mode beamforming applications in wireless location, electronically-scanned antennas for radar, RF sensing, microwave imaging and wireless communications require digital aperture arrays that offer a relatively constant far-field beam over several octaves of bandwidth. Several beamforming schemes including the well-known true time-delay and the phased array beamformers have been realized using either finite impulse response (FIR) or fast Fourier transform (FFT) digital filter-sum based techniques. These beamforming algorithms offer the desired selectivity at the cost of a high computational complexity and frequency-dependent far-field array patterns. A novel approach to receiver beamforming is the use of massively parallel 2-D infinite impulse response (IIR) fan filterbanks for the synthesis of relatively frequency independent RF beams at an order of magnitude lower multiplier complexity compared to FFT or FIR filter based conventional algorithms. The 2-D IIR filterbanks demand fast digital processing that can support several octaves of RF bandwidth, and fast analog-to-digital converters (ADCs) for RF-to-bits type direct conversion of wideband antenna element signals. Fast digital implementation platforms that can realize high-precision recursive filter structures necessary for real-time beamforming, at RF radio bandwidths, are also desired. We propose a novel technique that combines a passive RF channelizer, multichannel ADC technology, and single-phase massively parallel 2-D IIR digital fan filterbanks, realized at low complexity using FPGA and/or ASIC technology. There exists native support for a larger bandwidth than the maximum clock frequency of the digital implementation technology. We also strive to achieve More-than-Moore throughput by processing a wideband RF signal having content with N-fold (B = N Fclk/2) bandwidth compared to the maximum clock frequency Fclk Hz of the digital VLSI platform under consideration. Such increase in bandwidth is

  4. Active terahertz imaging with Ne indicator lamp detector arrays

    Science.gov (United States)

    Kopeika, N. S.; Abramovich, A.; Yadid-Pecht, O.; Yitzhaky, Y.

    2009-08-01

    The advantages of terahertz (THz) imaging are well known. THz waves penetrate most non-conducting media well, and there are no known biological hazards. This makes such imaging systems important for homeland security, as they can be used to image concealed objects and often into rooms or buildings from the outside. Biomedical applications are also arising. Unfortunately, THz imaging is quite expensive, especially for real-time systems, largely because of the price of the detector. Bolometers and pyroelectric detectors can each easily cost at least hundreds of dollars if not more, thus making focal plane arrays of them quite expensive. We have found that common miniature commercial neon indicator lamps, costing typically about 30 cents each, exhibit high sensitivity to THz radiation [1-3], with microsecond-order rise times, thus making them excellent candidates for such focal plane arrays. The NEP is on the order of 10⁻¹⁰ W/√Hz. Significant improvement of detection performance is expected when heterodyne detection is used. Efforts are being made to develop focal plane array imagers using such devices at 300 GHz. Indeed, preliminary images using 4x4 arrays have already been obtained. An 8x8 VLSI board has been developed and is presently being tested. Since no similar imaging systems have been developed previously, there are many new problems to be solved with such a novel and unconventional imaging system. These devices act as square-law detectors, with detected signal proportional to THz power. This allows them to act as mixers in heterodyne detection, thus allowing the NEP to be reduced further by almost two orders of magnitude. Plans are to expand the arrays to larger sizes, and to employ super-resolution techniques to improve image quality beyond that ordinarily obtainable at THz frequencies.

  5. Vlsi implementation of flexible architecture for decision tree classification in data mining

    Science.gov (United States)

    Sharma, K. Venkatesh; Shewandagn, Behailu; Bhukya, Shankar Nayak

    2017-07-01

    Data mining algorithms have become vital to researchers in the science, engineering, medicine, business, search and security domains. In recent years, there has been a tremendous rise in the size of the data being collected and analyzed. Classification is one of the main difficulties faced in data mining. Among the solutions developed for this problem, the most widely accepted is Decision Tree Classification (DTC), which gives high precision while handling very large amounts of data. This paper presents a VLSI implementation of a flexible architecture for decision tree classification in data mining using the C4.5 algorithm.
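
    As a rough illustration of the split criterion behind C4.5 (the gain ratio), the short sketch below scores candidate attributes on a toy data set; it is not the paper's VLSI architecture, and the attributes and labels are invented.

        from collections import Counter
        from math import log2

        def entropy(labels):
            n = len(labels)
            return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

        def gain_ratio(rows, labels, attr):
            """C4.5-style gain ratio of splitting `rows` on attribute index `attr`."""
            n = len(rows)
            groups = {}
            for row, label in zip(rows, labels):
                groups.setdefault(row[attr], []).append(label)
            split_entropy = sum(len(g) / n * entropy(g) for g in groups.values())
            info_gain = entropy(labels) - split_entropy
            split_info = -sum((len(g) / n) * log2(len(g) / n) for g in groups.values())
            return info_gain / split_info if split_info > 0 else 0.0

        # Toy data: (outlook, windy) -> play?
        rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "no"), ("rain", "yes"), ("overcast", "no")]
        labels = ["no", "no", "yes", "no", "yes"]
        best = max(range(2), key=lambda a: gain_ratio(rows, labels, a))
        print("best attribute to split on:", best)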

  6. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing([1]). It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  7. Radiation hardness tests with a demonstrator preamplifier circuit manufactured in silicon on sapphire (SOS) VLSI technology

    International Nuclear Information System (INIS)

    Bingefors, N.; Ekeloef, T.; Eriksson, C.; Paulsson, M.; Moerk, G.; Sjoelund, A.

    1992-01-01

    Samples of the preamplifier circuit, as well as of separate n and p channel transistors of the type contained in the circuit, were irradiated with gammas from a ⁶⁰Co source up to an integrated dose of 3 Mrad (30 kGy). The VLSI manufacturing technology used is the SOS4 process of ABB Hafo. A first analysis of the tests shows that the performance of the amplifier remains practically unaffected by the radiation for total doses up to 1 Mrad. At higher doses up to 3 Mrad the circuit amplification factor decreases by a factor between 4 and 5 whereas the output noise level remains unchanged. It is argued that it may be possible to reduce the decrease in amplification factor in future by optimizing the amplifier circuit design further. (orig.)

  8. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beamforming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals, but instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
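
    A rough sense of the idea, tracking the effective dimensionality of array-wide data to flag misbehaving channels, can be had from the numpy sketch below; the 95% energy threshold and the synthetic data are invented for illustration and are not the authors' system.

        import numpy as np

        def effective_dimension(window, energy=0.95):
            """Number of singular values needed to capture `energy`
            of the variance in a (channels x samples) data window."""
            x = window - window.mean(axis=1, keepdims=True)
            s = np.linalg.svd(x, compute_uv=False)
            power = np.cumsum(s**2) / np.sum(s**2)
            return int(np.searchsorted(power, energy) + 1)

        rng = np.random.default_rng(0)
        common = rng.standard_normal(2000)                # coherent signal seen by all channels
        good = np.tile(common, (9, 1)) + 0.1 * rng.standard_normal((9, 2000))

        healthy = effective_dimension(good)
        bad = good.copy()
        bad[3] = 5.0 * rng.standard_normal(2000)          # one channel replaced by uncorrelated noise
        degraded = effective_dimension(bad)

        print(healthy, degraded)    # dimensionality rises when a channel misbehaves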

  9. VLSI Architectures for Sliding-Window-Based Space-Time Turbo Trellis Code Decoders

    Directory of Open Access Journals (Sweden)

    Georgios Passas

    2012-01-01

    Full Text Available The VLSI implementation of SISO-MAP decoders used for traditional iterative turbo coding has been investigated in the literature. In this paper, a complete architectural model of a space-time turbo code receiver that includes elementary decoders is presented. These architectures are based on newly proposed building blocks such as a recursive add-compare-select-offset (ACSO unit, A-, B-, Γ-, and LLR output calculation modules. Measurements of complexity and decoding delay of several sliding-window-technique-based MAP decoder architectures and a proposed parameter set lead to defining equations and comparison between those architectures.

  10. Initial beam test results from a silicon-strip detector with VLSI readout

    International Nuclear Information System (INIS)

    Adolphsen, C.; Litke, A.; Schwarz, A.

    1986-01-01

    Silicon detectors with 256 strips, having a pitch of 25 μm, and connected to two 128 channel NMOS VLSI chips each (Microplex), have been tested in relativistic charged particle beams at CERN and at the Stanford Linear Accelerator Center. The readout chips have an input channel pitch of 47.5 μm and a single multiplexed output which provides voltages proportional to the integrated charge from each strip. The most probable signal height from minimum ionizing tracks was 15 times the rms noise in any single channel. Two-track traversals with a separation of 100 μm were cleanly resolved

  11. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/sub-millimeter Array (ALMA) for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface in verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science-ready data products. The pipeline is developed as part of the CASA software package by an

  12. Techniques for Computing the DFT Using the Residue Fermat Number Systems and VLSI

    Science.gov (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Pei, D. Y.; Reed, I. S.

    1985-01-01

    The integer complex multiplier and adder over the direct sum of two copies of a finite field is specialized to the direct sum of the rings of integers modulo Fermat numbers. Such multiplications and additions can be used in the implementation of a discrete Fourier transform (DFT) of a sequence of complex numbers. The advantage of the present approach is that the number of multiplications needed for the DFT can be reduced substantially over the previous approach. The architectural designs using this approach are regular, simple, expandable and, therefore, naturally suitable for VLSI implementation.

  13. VLSI-based video event triggering for image data compression

    Science.gov (United States)

    Williams, Glenn L.

    1994-02-01

    Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
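
    The trigger-and-archive idea described above can be sketched in a few lines of host-side code. This is only an illustration of pre-trigger/post-trigger buffering driven by frame differences, not the NASA state-machine design; the threshold and frame sizes are arbitrary.

        from collections import deque
        import numpy as np

        def capture_event(frame_stream, threshold=10.0, pre=5, post=5):
            """Scan a stream of frames; when the mean absolute difference between
            consecutive frames exceeds `threshold`, return the buffered pre-trigger
            frames together with `post` frames recorded after the trigger."""
            frames = iter(frame_stream)
            history = deque(maxlen=pre)            # rolling pre-trigger buffer
            previous = None
            for frame in frames:
                if previous is not None:
                    change = np.abs(frame.astype(float) - previous.astype(float)).mean()
                    if change > threshold:         # video event detected
                        archived = list(history) + [frame]
                        for extra in frames:       # keep a few post-trigger frames
                            archived.append(extra)
                            if len(archived) >= pre + 1 + post:
                                break
                        return archived
                history.append(frame)
                previous = frame
            return []                              # no event in the stream

        # Example: static scene with a sudden bright object appearing at frame 20.
        stream = [np.zeros((32, 32), dtype=np.uint8) for _ in range(40)]
        for f in stream[20:]:
            f[8:16, 8:16] = 255
        event = capture_event(stream)
        print(len(event), "frames archived around the trigger")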

  14. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is markedly reduced, to 1/3 of that of a conventional memory, so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design functions correctly.

  15. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  16. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  17. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    Science.gov (United States)

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of must of the Savatiano variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring, due to different storage conditions of the musts and due to a deliberate increase of the acetic acid content of one of the musts during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained for either pure or contaminated samples with controlled concentrations of standard ethanol solutions of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so this gas sensing system could potentially be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
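
    The PCA step used to separate normal from spoiled fermentations can be sketched as follows; the eight-channel response matrix here is synthetic and only illustrates how headspace measurements would be projected onto principal components, not the study's actual data.

        import numpy as np

        def pca_scores(responses, n_components=2):
            """Project (samples x sensors) response data onto its first principal components."""
            centered = responses - responses.mean(axis=0)
            # SVD of the centered data gives the principal axes in `vt`.
            u, s, vt = np.linalg.svd(centered, full_matrices=False)
            return centered @ vt[:n_components].T

        rng = np.random.default_rng(1)
        normal = rng.normal(loc=1.0, scale=0.05, size=(10, 8))      # daily responses, normal batch
        spoiled = rng.normal(loc=1.0, scale=0.05, size=(10, 8))
        spoiled[:, 2] += 0.5                                        # one sensor responds to acetic acid

        scores = pca_scores(np.vstack([normal, spoiled]))
        print(scores[:10].mean(axis=0), scores[10:].mean(axis=0))   # the two batches separate on PC1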

  18. An Asynchronous Low Power and High Performance VLSI Architecture for Viterbi Decoder Implemented with Quasi Delay Insensitive Templates

    Directory of Open Access Journals (Sweden)

    T. Kalavathi Devi

    2015-01-01

    Full Text Available Convolutional codes are widely used as Forward Error Correction (FEC) codes in digital communication systems. For decoding convolutional codes at the receiver end, the Viterbi decoder is often preferred. This decoder meets the demands of high speed and low power. At present, the design of a competent system in Very Large Scale Integration (VLSI) technology requires these VLSI parameters to be finely defined. The proposed asynchronous method focuses on reducing the power consumption of the Viterbi decoder for various constraint lengths using asynchronous modules. The asynchronous designs are based on commonly used Quasi Delay Insensitive (QDI) templates, namely, Precharge Half Buffer (PCHB) and Weak Conditioned Half Buffer (WCHB). The functionality of the proposed asynchronous design is simulated and verified using Tanner Spice (TSPICE) in the 0.25 µm, 65 nm, and 180 nm technologies of the Taiwan Semiconductor Manufacturing Company (TSMC). The simulation results illustrate that the asynchronous design techniques achieve a 25.21% power reduction compared to the synchronous design and operate at a speed of 475 MHz.
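
    For readers unfamiliar with the decoder itself, a minimal software model of Viterbi decoding for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal) is sketched below. It illustrates only the add-compare-select recursion and has nothing to do with the asynchronous QDI circuits of the record; the message and injected error are invented.

        def conv_encode(bits, polys=(0b111, 0b101), k=3):
            """Rate-1/2 convolutional encoder (constraint length 3, generators 7,5 octal)."""
            state = 0
            out = []
            for b in bits:
                state = ((state << 1) | b) & ((1 << k) - 1)
                out.extend(bin(state & p).count("1") % 2 for p in polys)
            return out

        def viterbi_decode(received, polys=(0b111, 0b101), k=3):
            """Hard-decision Viterbi decoding via the add-compare-select recursion."""
            n_states = 1 << (k - 1)
            INF = float("inf")
            metric = [0.0] + [INF] * (n_states - 1)      # encoder starts in the all-zero state
            paths = [[] for _ in range(n_states)]
            for t in range(0, len(received), 2):
                symbol = received[t:t + 2]
                new_metric = [INF] * n_states
                new_paths = [None] * n_states
                for state in range(n_states):
                    if metric[state] == INF:
                        continue
                    for b in (0, 1):
                        full = ((state << 1) | b) & ((1 << k) - 1)
                        expected = [bin(full & p).count("1") % 2 for p in polys]
                        branch = sum(r != e for r, e in zip(symbol, expected))
                        nxt = full & (n_states - 1)
                        cand = metric[state] + branch
                        if cand < new_metric[nxt]:       # compare-select
                            new_metric[nxt] = cand
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            best = min(range(n_states), key=lambda s: metric[s])
            return paths[best]

        message = [1, 0, 1, 1, 0, 0, 1, 0]
        coded = conv_encode(message)
        coded[3] ^= 1                                    # inject a single channel error
        assert viterbi_decode(coded) == message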

  19. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents the results of a DARPA/MTO Composite CAD Project aimed at developing a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio and microfluidic devices and complete microsystems. The project began in July 1998, and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The developed tool would provide a high fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  20. An electron undulating ring for VLSI lithography

    International Nuclear Information System (INIS)

    Tomimasu, T.; Mikado, T.; Noguchi, T.; Sugiyama, S.; Yamazaki, T.

    1985-01-01

    The development of the ETL storage ring ''TERAS'' as an undulating ring has been continued to achieve wide-area exposure of synchrotron radiation (SR) in VLSI lithography. Stable vertical and horizontal undulating motions of stored beams are demonstrated around the horizontal design orbit of TERAS, using two small steering magnets, of which one is used for vertical undulation and the other for horizontal undulation. Each steering magnet is inserted into one of the periodic configurations of guide field elements. As one of the useful applications of undulating electron beams, a vertically wide exposure of SR has been demonstrated in SR lithography. The maximum vertical deviation from the design orbit occurs near the steering magnet. The maximum vertical tilt angle of the undulating beam near the nodes is about ±2 mrad for a steering magnetic field of 50 gauss. Another proposal is for high-intensity, uniform and wide exposure of SR from a wiggler installed in TERAS, using vertical and horizontal undulating motions of stored beams. A 1.4 m long permanent magnet wiggler has been installed for this purpose this April.

  1. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.

  2. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    Science.gov (United States)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  3. PERFORMANCE OF LEAKAGE POWER MINIMIZATION TECHNIQUE FOR CMOS VLSI TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tharaneeswaran

    2012-06-01

    Full Text Available Leakage power in CMOS VLSI technology is a great concern. To reduce leakage power in CMOS circuits, a Leakage Power Minimization Technique (LPMT) is implemented in this paper. Leakage currents are monitored and compared. The comparator kicks the charge pump to give the body voltage (Vbody). Simulations of these circuits are done using TSMC 0.35 µm technology at various operating temperatures. A current steering Digital-to-Analog Converter (CSDAC) is used as a test core to validate the idea. The test core (e.g. the 8-bit CSDAC) had a power consumption of 347.63 mW. The LPMT circuit alone consumes 6.3405 mW. This technique results in a reduction of the leakage power of the 8-bit CSDAC by 5.51 mW and increases the reliability of the test core. Mentor Graphics ELDO and EZ-wave are used for simulations.

  4. Ant System-Corner Insertion Sequence: An Efficient VLSI Hard Module Placer

    Directory of Open Access Journals (Sweden)

    HOO, C.-S.

    2013-02-01

    Full Text Available Placement is important in VLSI physical design as it determines the time-to-market and the chip's reliability. In this paper, a new floorplan representation coupled with the Ant System, namely the Corner Insertion Sequence (CIS), is proposed. Though CIS's search complexity is smaller than that of the state-of-the-art representation Corner Sequence (CS), CIS adopts a preset boundary on the placement and hence leads to a search bound similar to that of CS. This enables previously unutilized corner edges to become viable. Also, eliminating the redundancy of the CS representation gives CIS its lower search complexity. Experimental results on the Microelectronics Center of North Carolina (MCNC) hard block benchmark circuits show that the proposed algorithm performs comparably in terms of area yet is at least two times faster than CS.

  5. Emergent Auditory Feature Tuning in a Real-Time Neuromorphic VLSI System.

    Science.gov (United States)

    Sheik, Sadique; Coath, Martin; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas; Chicca, Elisabetta

    2012-01-01

    Many sounds of ecological importance, such as communication calls, are characterized by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamo-cortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principle problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectro-temporal patterns. The availability of hardware on which the model can be implemented, makes this a significant step toward the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.

  6. Emergent auditory feature tuning in a real-time neuromorphic VLSI system

    Directory of Open Access Journals (Sweden)

    Sadique eSheik

    2012-02-01

    Full Text Available Many sounds of ecological importance, such as communication calls, are characterised by time-varying spectra. However, most neuromorphic auditory models to date have focused on distinguishing mainly static patterns, under the assumption that dynamic patterns can be learned as sequences of static ones. In contrast, the emergence of dynamic feature sensitivity through exposure to formative stimuli has been recently modeled in a network of spiking neurons based on the thalamocortical architecture. The proposed network models the effect of lateral and recurrent connections between cortical layers, distance-dependent axonal transmission delays, and learning in the form of Spike Timing Dependent Plasticity (STDP), which effects stimulus-driven changes in the pattern of network connectivity. In this paper we demonstrate how these principles can be efficiently implemented in neuromorphic hardware. In doing so we address two principal problems in the design of neuromorphic systems: real-time event-based asynchronous communication in multi-chip systems, and the realization in hybrid analog/digital VLSI technology of neural computational principles that we propose underlie plasticity in neural processing of dynamic stimuli. The result is a hardware neural network that learns in real-time and shows preferential responses, after exposure, to stimuli exhibiting particular spectrotemporal patterns. The availability of hardware on which the model can be implemented makes this a significant step towards the development of adaptive, neurobiologically plausible, spike-based, artificial sensory systems.
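
    As background for readers, the pair-based STDP rule referred to in the two records above can be written down in a few lines; the time constants and amplitudes below are generic textbook values, not those of the VLSI implementation.

        import math

        def stdp_weight_change(pre_spike_times, post_spike_times,
                               a_plus=0.01, a_minus=0.012, tau=20.0):
            """Pair-based STDP: potentiate when the presynaptic spike precedes the
            postsynaptic spike, depress otherwise (times in milliseconds)."""
            dw = 0.0
            for t_pre in pre_spike_times:
                for t_post in post_spike_times:
                    dt = t_post - t_pre
                    if dt > 0:
                        dw += a_plus * math.exp(-dt / tau)     # pre before post -> LTP
                    elif dt < 0:
                        dw -= a_minus * math.exp(dt / tau)     # post before pre -> LTD
            return dw

        # Causal pairing strengthens the synapse, anti-causal pairing weakens it.
        print(stdp_weight_change([10.0], [15.0]))   # positive change
        print(stdp_weight_change([15.0], [10.0]))   # negative change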

  7. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results of this work posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?". Thus a new area of research was born from this collaboration, highlighting the value of these interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement to the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, but there have also been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  8. Data processing

    International Nuclear Information System (INIS)

    Cousot, P.

    1988-01-01

    The 1988 progress report of the Data Processing Laboratory (Polytechnic School, France) is presented. The laboratory's research fields are: semantics, program testing and semantic analysis, computer algebra, software applications, algorithms, neural networks and VLSI (Very Large Scale Integration). The investigations concerning polynomial rings are performed by means of the standard basis approach. Among the research topics are Pascal programs, parallel processing, the combinatorial, statistical and asymptotic properties of fundamental data processing tools, signal processing and pattern recognition. The published papers, conference communications and theses are also listed. [fr]

  9. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    Science.gov (United States)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.
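
    The two-multiplication property can be checked with a short numeric sketch. For a Fermat number F_n = 2^(2^n) + 1, the element j = 2^(2^(n-1)) satisfies j² ≡ -1 (mod F_n), so a Gaussian integer maps to a pair of residues that multiply componentwise. The code below is only a behavioural illustration of that arithmetic, not the VLSI design of the article; the test operands are invented.

        F = 2**16 + 1               # Fermat number F_4 = 65537
        J = 2**8                    # J*J = 2**16 ≡ -1 (mod F), a square root of -1

        def to_qfns(a, b):
            """Map the Gaussian integer a + bi to its two QFNS residues."""
            return ((a + J * b) % F, (a - J * b) % F)

        def from_qfns(p, q):
            """Recover (real, imag) from the residue pair."""
            inv2 = pow(2, -1, F)
            inv2j = pow(2 * J, -1, F)
            real = (p + q) * inv2 % F
            imag = (p - q) * inv2j % F
            # Interpret residues as signed values in (-F/2, F/2).
            signed = lambda x: x - F if x > F // 2 else x
            return signed(real), signed(imag)

        def complex_multiply(x, y):
            """(a+bi)(c+di) using only two general multiplications mod F."""
            p1, q1 = to_qfns(*x)
            p2, q2 = to_qfns(*y)
            return from_qfns(p1 * p2 % F, q1 * q2 % F)   # componentwise product

        assert complex_multiply((3, 4), (5, -2)) == (3 * 5 - 4 * (-2), 3 * (-2) + 4 * 5)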

  10. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μsec deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to efficiently identify and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full resolution track fitting inside low resolution candidate tracks found by a set of 16 custom ASIC devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of dedicated VLSI AM devices and programmable FPGAs, is maintained to achieve further technology performance, miniaturization and integration of the current state-of-the-art prototypes. This makes it possible to fully exploit new applications within and outside the High Energy Physics field. We plan t...

  11. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  12. A novel VLSI processor for high-rate, high resolution spectroscopy

    CERN Document Server

    Pullia, Antonio; Gatti, E; Longoni, A; Buttler, W

    2000-01-01

    A novel time-variant VLSI shaper amplifier, suitable for multi-anode Silicon Drift Detectors or other multi-element solid-state X-ray detection systems, is proposed. The new read-out scheme has been conceived for demanding applications with synchrotron light sources, such as X-ray holography or EXAFS, where both high count-rates and high-energy resolutions are required. The circuit is of the linear time-variant class, accepts randomly distributed events and features: a finite-width (1-10 mu s) quasi-optimal weight function, an ultra-low-level energy discrimination (approx 150 eV), and a full compatibility for monolithic integration in CMOS technology. Its impulse response has a staircase-like shape, but the weight function (which is in general different from the impulse response in time-variant systems) is quasi trapezoidal. The operation principles of the new scheme as well as the first experimental results obtained with a prototype of the circuit are presented and discussed in the work.

  13. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.

  14. Processing and display of medical three dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Darier, P.

    1985-01-01

    Imaging modalities such as X-ray computerized tomography (CT), nuclear medicine and nuclear magnetic resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging. We are currently developing experimental software that allows the analysis, processing and display of 3-D arrays of numerical data organized in a hierarchical data structure using the OCTREE (octal-tree) encoding technique, based on a recursive subdivision of the data volume. The OCTREE encoding structure is an extension of the two-dimensional tree structure, the quadtree, developed for image processing applications. Before any operation, the 3-D data array is OCTREE encoded; thereafter all processing is carried out on the encoded object. The elementary process for the elaboration of a synthetic image includes conditioning the volume: volume partition (numerical and spatial segmentation), choice of the view-point..., and two-dimensional display, either by spatial integration (radiography) or by shaded surface representation. This paper introduces these concepts and specifies the advantages of OCTREE encoding techniques in realizing these operations. Furthermore, the application of the OCTREE encoding scheme to the display of 3-D medical volumes generated from multiple CT scans is presented.
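
    As a rough illustration of the recursive subdivision described above, the sketch below encodes a cubic 3-D array as an octree in which homogeneous blocks become leaves; the nested-tuple representation and the example volume are assumptions made only for illustration.

    # Hypothetical sketch of OCTREE encoding of a 3-D array: a cubic block whose
    # voxels all share one value becomes a leaf; otherwise it is split into its
    # 8 octants, each encoded recursively.
    import numpy as np

    def octree_encode(vol):
        """Return ('leaf', value) or ('node', [8 children])."""
        if np.all(vol == vol.flat[0]):            # homogeneous block -> leaf
            return ('leaf', vol.flat[0])
        h = vol.shape[0] // 2                     # assumes a cubic, power-of-two side
        halves = (slice(0, h), slice(h, None))
        return ('node', [octree_encode(vol[z, y, x])
                         for z in halves for y in halves for x in halves])

    vol = np.zeros((8, 8, 8), dtype=np.uint8)
    vol[4:, 4:, 4:] = 1                           # one dense octant
    tree = octree_encode(vol)
    print(tree[0], len(tree[1]))                  # 'node' 8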

  15. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Full Text Available Methods for improving existing wireless transmission systems are proposed. The mathematical apparatus for using an adaptive antenna array in LTE with MIMO for space-time processing is presented and supported by models, whose graphs are shown. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size and throughput.

  16. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers followed by nanorod nucleation sites and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate independent seeding and nucleation site generation with vapour phase transport growth of high crystalline and optical quality ZnO nanorod arrays. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  17. Sampling phased array, a new technique for ultrasonic signal processing and imaging now available to industry

    OpenAIRE

    Verkooijen, J.; Bulavinov, A.

    2008-01-01

    Over the past 10 years the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called "Sampling Phased Array" has been developed in the Fraunhofer Institute for non-destructive testing [1]. It realizes a unique approach of measurement and processing of ultrasonic signals. The s...

  18. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5GHz. The hybrid integration of the

  19. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
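
    A schematic way to see the assembly of an equivalent wideband LFM chirp from narrow-band segments is given below; the symbols (K segments of duration T_p, chirp rate \gamma, frequency step \Delta f) are illustrative and are not the report's notation.

    \[ s_k(t) = \exp\!\left[\, j 2\pi (f_0 + k\,\Delta f)\, t + j\pi\gamma t^{2} \right], \qquad 0 \le t < T_p,\;\; k = 0, \dots, K-1 \]

    With \Delta f chosen close to the per-segment bandwidth \gamma T_p, the segments concatenate in frequency after phase-coherent recombination in the processing, giving an equivalent chirp of bandwidth roughly K\gamma T_p, while each narrow-band pulse can still be steered with ordinary phase shifters.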

  20. Digital VLSI systems design a design manual for implementation of projects on FPGAs and ASICs using Verilog

    CERN Document Server

    Ramachandran, S

    2007-01-01

    Digital VLSI Systems Design is written for an advanced level course using Verilog and is meant for undergraduates, graduates and research scholars of Electrical, Electronics, Embedded Systems, Computer Engineering and interdisciplinary departments such as Bio Medical, Mechanical, Information Technology, Physics, etc. It serves as a reference design manual for practicing engineers and researchers as well. Diligent freelance readers and consultants may also start using this book with ease. The book presents new material and theory as well as synthesis of recent work with complete Project Designs

  1. Solution processed bismuth sulfide nanowire array core/silver sulfide shell solar cells

    NARCIS (Netherlands)

    Cao, Y.; Bernechea, M.; Maclachlan, A.; Zardetto, V.; Creatore, M.; Haque, S.A.; Konstantatos, G.

    2015-01-01

    Low bandgap inorganic semiconductor nanowires have served as building blocks in solution processed solar cells to improve their power conversion capacity and reduce fabrication cost. In this work, we first reported bismuth sulfide nanowire arrays grown from colloidal seeds on a transparent

  2. Structural control of ultra-fine CoPt nanodot arrays via electrodeposition process

    Energy Technology Data Exchange (ETDEWEB)

    Wodarz, Siggi [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan); Hasegawa, Takashi; Ishio, Shunji [Department of Materials Science, Akita University, Akita City 010-8502 (Japan); Homma, Takayuki, E-mail: t.homma@waseda.jp [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan)

    2017-05-15

    CoPt nanodot arrays were fabricated by combining electrodeposition and electron beam lithography (EBL) for use in bit-patterned media (BPM). To achieve precise control of the deposition uniformity and coercivity of the CoPt nanodot arrays, their crystal structure and magnetic properties were tuned by controlling the diffusion state of metal ions from the initial deposition stage with the application of bath agitation. With bath agitation, the composition gradient of the CoPt alloy with thickness was mitigated to give a near-ideal alloy composition of Co:Pt = 80:20, which induces epitaxial-like growth from the Ru substrate, thus resulting in the improvement of the crystal orientation of the hcp (002) structure from its initial deposition stages. Furthermore, the cross-sectional transmission electron microscope (TEM) analysis of the nanodots deposited with bath agitation showed CoPt growth along its c-axis oriented in the perpendicular direction, having uniform lattice fringes on the hcp (002) plane from the Ru underlayer interface, which is a significant factor in inducing perpendicular magnetic anisotropy. Magnetic characterization of the CoPt nanodot arrays showed an increase in the perpendicular coercivity and squareness of the hysteresis loops from 2.0 kOe and 0.64 (without agitation) to 4.0 kOe and 0.87 with bath agitation. Based on the detailed characterization of the nanodot arrays, precise crystal structure control of nanodot arrays with ultra-high recording density by an electrochemical process was successfully demonstrated. - Highlights: • Ultra-fine CoPt nanodot arrays were fabricated by electrodeposition. • Crystallinity of hcp (002) was improved with uniform composition formation. • Uniform formation of hcp lattices leads to an increase in the coercivity.

  3. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion
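
    For reference, the calibration curves mentioned here enter the standard Richter-style definition of local magnitude; the form below is the generic one, with the Alaskan offsets taken only from the numbers quoted in this abstract.

    \[ M_L = \log_{10} A(\Delta) - \log_{10} A_0(\Delta) \]

    Here A is the measured amplitude and -\log_{10} A_0(\Delta) the distance correction; a curve that is 0.2 M_L units more attenuative at 400 km means that the correction term there is larger by 0.2, so the same amplitude maps to a magnitude 0.2 units higher than the Californian curve would give.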

  4. Analog VLSI Models of Range-Tuned Neurons in the Bat Echolocation System

    Directory of Open Access Journals (Sweden)

    Horiuchi Timothy

    2003-01-01

    Full Text Available Bat echolocation is a fascinating topic of research for both neuroscientists and engineers, due to the complex and extremely time-constrained nature of the problem and its potential for application to engineered systems. In the bat's brainstem and midbrain exist neural circuits that are sensitive to the specific difference in time between the outgoing sonar vocalization and the returning echo. While some of the details of the neural mechanisms are known to be species-specific, a basic model of reafference-triggered, postinhibitory rebound timing is reasonably well supported by available data. We have designed low-power, analog VLSI circuits to mimic this mechanism and have demonstrated range-dependent outputs for use in a real-time sonar system. These circuits are being used to implement range-dependent vocalization amplitude, vocalization rate, and closest target isolation.

  5. Processing and display of three-dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Antoine, M.; Darier, P.

    1986-04-01

    The analysis of three-dimensional (3-D) arrays of numerical data from medical, industrial or scientific imaging, by synthetic generation of realistic images, has been widely developed. Octree encoding, which organizes the volume data in a hierarchical tree structure, has some interesting features for the processing of 3-D arrays of data. The Octree encoding method, based on the recursive subdivision of a 3-D array, is an extension of Quadtree encoding in the two-dimensional plane. We have developed a software package to validate the basic Octree encoding methodology for some manipulation and display operations on volume data. This contribution introduces the technique we have used (called the "overlay technique") to perform the projection of an Octree onto a Quadtree-encoded image plane. The application of this technique to hidden-surface display is presented.

  6. Real time track finding in a drift chamber with a VLSI neural network

    International Nuclear Information System (INIS)

    Lindsey, C.S.; Denby, B.; Haggerty, H.; Johns, K.

    1992-01-01

    In a test setup, a hardware neural network determined track parameters of charged particles traversing a drift chamber. Voltages proportional to the drift times in 6 cells of the 3-layer chamber were inputs to the Intel ETANN neural network chip which had been trained to give the slope and intercept of tracks. We compare network track parameters to those obtained from off-line track fits. To our knowledge this is the first on-line application of a VLSI neural network to a high energy physics detector. This test explored the potential of the chip and the practical problems of using it in a real world setting. We compare the chip performance to a neural network simulation on a conventional computer. We discuss possible applications of the chip in high energy physics detector triggers. (orig.)

  7. Smart Sensors: Why and when the origin was and why and where the future will be

    Science.gov (United States)

    Corsi, C.

    2013-12-01

    Smart sensors are a technique developed in the 1970s, when processing capabilities based on readout integrated with signal processing were still far from the complexity needed in advanced IR surveillance and warning systems, because of the enormous amount of noise and unwanted signals emitted by the operating scenario, especially in military applications. Smart sensor technology was kept restricted within a closed military environment and then exploded in applications and performance in the 1990s, thanks to the impressive improvements in integrated signal readout and processing achieved by CCD-CMOS technologies in focal plane arrays (FPAs). In fact, the rapid advances of very large scale integration (VLSI) processor technology and mosaic EO detector array technology allowed new generations of smart sensors to be developed with much improved signal processing, by integrating microcomputers and other VLSI signal processors inside the sensor structure and achieving some basic functions of living eyes (dynamic stare, non-uniformity compensation, spatial and temporal filtering). New and future technologies (nanotechnology, bio-organic electronics, bio-computing) are enabling a new generation of smart sensors, extending smartness from the space-time domain to spectroscopic, functional, multi-domain signal processing. The history and future prospects of smart sensors are reported.

  8. Fabrication of Aligned Polyaniline Nanofiber Array via a Facile Wet Chemical Process.

    Science.gov (United States)

    Sun, Qunhui; Bi, Wu; Fuller, Thomas F; Ding, Yong; Deng, Yulin

    2009-06-17

    In this work, we demonstrate for the first time a template free approach to synthesize aligned polyaniline nanofiber (PN) array on a passivated gold (Au) substrate via a facile wet chemical process. The Au surface was first modified using 4-aminothiophenol (4-ATP) to afford the surface functionality, followed subsequently by an oxidation polymerization of aniline (AN) monomer in an aqueous medium using ammonium persulfate as the oxidant and tartaric acid as the doping agent. The results show that a vertically aligned PANI nanofiber array with individual fiber diameters of ca. 100 nm, heights of ca. 600 nm and a packing density of ca. 40 pieces·µm⁻², was synthesized. Copyright © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronic steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation stud

  10. Frequency Diverse Array Radar Signal Processing via Space-Range-Doppler Focus (SRDF) Method

    Directory of Open Access Journals (Sweden)

    Chen Xiaolong

    2018-04-01

    Full Text Available To meet the urgent demand for low-observable moving target detection in complex environments, a novel Frequency Diverse Array (FDA) radar signal processing method based on Space-Range-Doppler Focusing (SRDF) is proposed in this paper. The current development status of FDA radar, the design of the array structure, beamforming, and joint estimation of distance and angle are systematically reviewed. The extra degrees of freedom provided by FDA radar are fully utilized; these include the Degrees Of Freedom (DOFs) of the transmitted waveform, the location of array elements, the correlation of beam azimuth and distance, and the long dwell time, which are also the DOFs in the joint spatial (angle, distance) and frequency (Doppler) dimensions. Simulation results show that the proposed method has the potential to improve target detection and parameter estimation for weak moving targets in complex environments and has broad application prospects in clutter and interference suppression, moving target refinement, etc.
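
    The angle-range coupling that SRDF exploits is often summarized by the first-order FDA array factor; the expression below is that commonly quoted form (uniform linear array of N elements, spacing d, carrier f_0 with wavelength \lambda_0, inter-element frequency increment \Delta f) and is not necessarily the exact model used in the paper.

    \[ AF(\theta, R, t) \approx \sum_{n=0}^{N-1} \exp\!\left\{ j 2\pi n \left[ \Delta f\, t - \frac{\Delta f\, R}{c} + \frac{d \sin\theta}{\lambda_0} \right] \right\} \]

    The beam peak thus depends jointly on range R, angle \theta and time, which is what makes joint space-range-Doppler focusing possible.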

  11. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of some recent research on programmable cellular arrays in computing and digital information processing systems is presented. It covers both combinational and sequential arrays, with fully arbitrary behaviour or realizing better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems, memory blocks, etc. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  12. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    Science.gov (United States)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to much denser detector packing and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for next-generation CMB experiments, which will continue to increase the pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  13. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale applications of the mirror arrays, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated by using parallel laser direct-write processing, which addresses this barrier. In addition, it is demonstrated that parallel writing can fabricate complex as well as simple arrays and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  14. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3mm x 3mm microchip consumes 1.29 mW power making it promising for applications including neuromorphic modeling and neural prostheses.
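
    Both the channel gating and the synaptic receptor dynamics referred to here are typically written as first-order rate equations of the generic form below; the chip's actual rate functions are set by its 384 configurable parameters, so this is only the common template, not the implemented equations.

    \[ \frac{dx}{dt} = \alpha(V)\,(1 - x) - \beta(V)\, x \]

    Here x is the fraction of open channels (or, for a synapse, of bound receptors, with \alpha driven by the presynaptic voltage through transmitter release) and \alpha, \beta are voltage-dependent opening and closing rates.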

  15. Digital VLSI design with Verilog a textbook from Silicon Valley Polytechnic Institute

    CERN Document Server

    Williams, John Michael

    2014-01-01

    This book is structured as a step-by-step course of study along the lines of a VLSI integrated circuit design project.  The entire Verilog language is presented, from the basics to everything necessary for synthesis of an entire 70,000 transistor, full-duplex serializer-deserializer, including synthesizable PLLs.  The author includes everything an engineer needs for in-depth understanding of the Verilog language:  Syntax, synthesis semantics, simulation, and test. Complete solutions for the 27 labs are provided in the downloadable files that accompany the book.  For readers with access to appropriate electronic design tools, all solutions can be developed, simulated, and synthesized as described in the book.   A partial list of design topics includes design partitioning, hierarchy decomposition, safe coding styles, back annotation, wrapper modules, concurrency, race conditions, assertion-based verification, clock synchronization, and design for test.   A concluding presentation of special topics inclu...

  16. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    Science.gov (United States)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  17. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free, ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)
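
    The abstract states only that detector responses to consecutive optical pulses are compared; the sketch below assumes one simple such rule (a detector that fires in exactly one of the two pulses yields a bit, all other outcomes are discarded), purely to illustrate why the output can be unbiased without post-processing.

    # Hypothetical comparison rule for pulse pairs on an APD array: a click in
    # pulse 1 only -> bit 0, a click in pulse 2 only -> bit 1, otherwise no bit.
    # With identical, independent click probabilities the two outcomes are
    # equally likely, so the bits are unbiased by construction.
    import random

    def bits_from_pulse_pairs(responses):
        """responses: iterable of (fired_in_pulse1, fired_in_pulse2) per detector."""
        bits = []
        for p1, p2 in responses:
            if p1 != p2:
                bits.append(1 if p2 else 0)
        return bits

    p_click = 0.3                                 # toy per-pulse click probability
    frame = [(random.random() < p_click, random.random() < p_click) for _ in range(1024)]
    print(len(bits_from_pulse_pairs(frame)), "bits from 1024 detectors")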

  18. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations by now are all user-dependent and only rely on accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in predefined manner during the operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulted from frequent sensor sampling and data processing, a high energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, approximately 45 times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures achieves 93.98% for user-independent case and 96.14% for user-dependent case when subjects hold the device randomly during completing the specified gestures. Although a few percent lower than the conventional best result, it still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly during operating the predefined gestures, which substantially enhances the user experience.

  19. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Science.gov (United States)

    McEwan, Alistair; van Schaik, André

    2003-12-01

    The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  20. Biophysical Neural Spiking, Bursting, and Excitability Dynamics in Reconfigurable Analog VLSI.

    Science.gov (United States)

    Yu, T; Sejnowski, T J; Cauwenberghs, G

    2011-10-01

    We study a range of neural dynamics under variations in biophysical parameters underlying extended Morris-Lecar and Hodgkin-Huxley models in three gating variables. The extended models are implemented in NeuroDyn, a four neuron, twelve synapse continuous-time analog VLSI programmable neural emulation platform with generalized channel kinetics and biophysical membrane dynamics. The dynamics exhibit a wide range of time scales extending beyond 100 ms neglected in typical silicon models of tonic spiking neurons. Circuit simulations and measurements show transition from tonic spiking to tonic bursting dynamics through variation of a single conductance parameter governing calcium recovery. We similarly demonstrate transition from graded to all-or-none neural excitability in the onset of spiking dynamics through the variation of channel kinetic parameters governing the speed of potassium activation. Other combinations of variations in conductance and channel kinetic parameters give rise to phasic spiking and spike frequency adaptation dynamics. The NeuroDyn chip consumes 1.29 mW and occupies 3 mm × 3 mm in 0.5 μm CMOS, supporting emerging developments in neuromorphic silicon-neuron interfaces.

  1. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by a two-step process is reported. The controllable generation of different lengths and the selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling the experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied in developing novel nanotube-based devices.

  2. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first class citizens. SciQL lowers the entrance fee of adopting relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, how the LOFAR light-curve database of all detected sources can be constructed, by correlating sources across the spatial, frequency, time and polarisation domains.

  3. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case); these analytical results are less well suited to predicting the detection performance of a real system. The remainder of the record consists of citation fragments: Nickel, Adaptive Beamforming for Phased Array Radars, Proc. Int. Radar Symposium IRS'98 (Munich, Sept. 1998), DGON and VDE/ITG, pp. 897-906; Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, CA, 1998, IEEE Cat. Nr. 0-7803-5148-7/98, pp. 1327-1331.

  4. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a completely new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique to understand the underlying diffusion and kinetic processes. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate their behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or as affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each of them, there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring maintaining the same active electrode area as the summation of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires cylindrical symmetry, making it applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a design tool.
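
    A toy version of the ring construction is sketched below; the radii, pitch and neighbour count are invented numbers, and the actual cell geometry handling in the paper is more involved.

    # Hypothetical sketch of the axial-neighbour-symmetry idea: replace the
    # nearest neighbours of a central nanoelectrode (radius r, pitch p, n_nb
    # neighbours) by a concentric ring of mean radius p whose area equals the
    # neighbours' total active area, so the cell becomes axially symmetric.
    import math

    def equivalent_ring(r, p, n_nb):
        area = n_nb * math.pi * r**2          # total neighbour electrode area
        half_w = area / (4 * math.pi * p)     # ring area 4*pi*p*half_w == area
        return p - half_w, p + half_w         # inner and outer radii of the ring

    r_in, r_out = equivalent_ring(r=50e-9, p=500e-9, n_nb=6)
    print(f"ring from {r_in * 1e9:.2f} nm to {r_out * 1e9:.2f} nm")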

  5. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    Science.gov (United States)

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors are verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.

  6. Effect of Source, Surfactant, and Deposition Process on Electronic Properties of Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Dheeraj Jain

    2011-01-01

    Full Text Available The electronic properties of arrays of carbon nanotubes from several different sources, differing in the manufacturing process used and in average properties such as length, diameter, and chirality, are studied. We used several common surfactants to disperse each of these nanotubes and then deposited them on Si wafers from their aqueous solutions using dielectrophoresis. Transport measurements were performed to compare and determine the effect of different surfactants, deposition processes, and synthesis processes on nanotubes synthesized using CVD, CoMoCAT, laser ablation, and HiPCO.

  7. Programmable trigger for electron pairs in ring image Cherenkov counters

    International Nuclear Information System (INIS)

    Glab, J.; Baur, R.; Manner, R.

    1990-01-01

    This paper describes a programmable trigger processor for the recognition of Cherenkov rings in a RICH counter. It identifies open electron pairs and suppresses close conversion and Dalitz pairs within 20 μs. More generally, the system can be used for correlating pixel images with pattern masks in order to locate all relatively well-defined patterns of a certain type. The trigger processor consists of a systolic processor array of 160 × 176, i.e., 28,160 identical processing elements (PEs) that filter out open electron pairs, and a pseudo adder array that determines whether there was at least one such pair. The processor array is assembled from 20 × 22 VLSI chips containing 8 × 8 PEs each. The semi-custom chip has been developed in 2 μm CMOS standard cell technology.

  8. State-of-the-art assessment of testing and testability of custom LSI/VLSI circuits. Volume 8: Fault simulation

    Science.gov (United States)

    Breuer, M. A.; Carlan, A. J.

    1982-10-01

    Fault simulation is widely used by industry in such applications as scoring the fault coverage of test sequences and constructing fault dictionaries. For use in testing VLSI circuits, a simulator is evaluated by its accuracy, i.e., its modelling capability. To be accurate, simulators must employ multi-valued logic in order to represent unknown signal values, impedances, signal transitions, etc.; circuit delays such as transport, rise/fall and inertial delays; and the fault modes to be handled. Of the three basic fault simulation approaches now in use (parallel, deductive and concurrent), concurrent fault simulation appears most promising.

  9. Lightweight solar array blanket tooling, laser welding and cover process technology

    Science.gov (United States)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  10. Petri Nets

    Indian Academy of Sciences (India)

    In Part 1 of this two-part article, we have seen ... programmable logic controllers and VLSI arrays, office automation systems, workflow management systems, ... complex discrete event and real-time systems; and Petri nets.

  11. Processes and Materials for Flexible PV Arrays

    National Research Council Canada - National Science Library

    Gierow, Paul

    2002-01-01

    .... A parallel incentive for the development of flexible PV arrays is the possibility of synergistic advantages for certain types of spacecraft, in particular the Solar Thermal Propulsion (STP) Vehicle...

  12. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    Science.gov (United States)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  13. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2016-01-01

    Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executed track fitting in full resolution inside low resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  14. The Future Evolution of the Fast TracKer Processing Unit

    CERN Document Server

    Gentsos, Christos; The ATLAS collaboration; Magalotti, Daniel; Bertolucci, Federico; Citraro, Saverio; Kordas, Kostantinos; Nikolaidis, Spyridon

    2015-01-01

    Real time tracking is a key ingredient for online event selection at hadron colliders. The Silicon Vertex Tracker at the CDF experiment and the Fast Tracker (FTK) at ATLAS are two successful examples of the importance of dedicated hardware to reconstruct full events at hadron machines. We present the future evolution of this technology, for applications in the High Luminosity runs at the Large Hadron Collider (HL-LHC). Data processing speed is achieved with custom VLSI pattern recognition and linearized track fitting executed inside modern FPGAs, exploiting deep pipelining, extensive parallelism, and efficient use of available resources. In the current system, one large FPGA executed track fitting in full resolution inside low resolution candidate tracks found by a set of custom ASIC devices, called Associative Memories (AM chips). The FTK dual structure, based on the cooperation of VLSI AM and programmable FPGAs, is maintained, but we plan to increase the FPGA parallelism by associating one FPGA to each AM c...

  15. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    International Nuclear Information System (INIS)

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-01-01

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. Lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. Inductance as well as resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that the electromagnetic energy is mainly converted to magnetic energy and kinetic energy and ohmic heating energy can be neglected before the final stagnation. The kinetic energy can be responsible for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing and the resistance of the load achieves its maximum of 0.6–1.0 Ω in about 10–20 ns
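
    With a lumped-element load of time-varying inductance L(t) and resistance R(t), and taking V = d(LI)/dt + RI, the usual energy bookkeeping behind statements like these is the standard partition below (generic form, not reproduced from the paper):

    \[ E_{EM}(t) = \int_0^t V I \, dt' = \tfrac{1}{2} L I^{2} + \int_0^t \tfrac{1}{2} I^{2} \frac{dL}{dt'}\, dt' + \int_0^t I^{2} R \, dt' \]

    The three terms are the magnetic energy stored in the load, the work done on the imploding array (which appears as kinetic energy), and the ohmic heating; the abstract's conclusion is that the last term is negligible before stagnation.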

  16. A parallel VLSI architecture for a digital filter of arbitrary length using Fermat number transforms

    Science.gov (United States)

    Truong, T. K.; Reed, I. S.; Yeh, C. S.; Shao, H. M.

    1982-01-01

    A parallel architecture for computation of the linear convolution of two sequences of arbitrary lengths using the Fermat number transform (FNT) is described. In particular a pipeline structure is designed to compute a 128-point FNT. In this FNT, only additions and bit rotations are required. A standard barrel shifter circuit is modified so that it performs the required bit rotation operation. The overlap-save method is generalized for the FNT to compute a linear convolution of arbitrary length. A parallel architecture is developed to realize this type of overlap-save method using one FNT and several inverse FNTs of 128 points. The generalized overlap save method alleviates the usual dynamic range limitation in FNTs of long transform lengths. Its architecture is regular, simple, and expandable, and therefore naturally suitable for VLSI implementation.
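
    A minimal numerical sketch of the transform-domain convolution is given below, using the small Fermat prime 257 and the root 2 (order 16) instead of the 128-point transform of the article; the block handling of the generalized overlap-save method is omitted. In an FNT all twiddle factors are powers of 2, which is why the hardware needs only additions and bit rotations.

    # Sketch: circular convolution via a 16-point Fermat number transform mod 257.
    M, N, ALPHA = 257, 16, 2                  # F_3 = 2^8 + 1; 2 has order 16 mod 257

    def fnt(x, root):
        return [sum(x[n] * pow(root, k * n, M) for n in range(N)) % M for k in range(N)]

    def ifnt(X):
        inv_root, inv_n = pow(ALPHA, M - 2, M), pow(N, M - 2, M)
        return [(inv_n * s) % M for s in fnt(X, inv_root)]

    def circular_convolution(a, b):
        return ifnt([(u * v) % M for u, v in zip(fnt(a, ALPHA), fnt(b, ALPHA))])

    a = [1, 2, 3, 4] + [0] * 12               # zero padding makes circular = linear here
    b = [5, 6, 7] + [0] * 13
    print(circular_convolution(a, b))          # [5, 16, 34, 52, 45, 28, 0, ...]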

  17. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega type. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not lock-stepped but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.

  18. Low power digital signal processing

    DEFF Research Database (Denmark)

    Paker, Ozgun

    2003-01-01

    hardwired ASICs and more than 6–21 times lower than current state-of-the-art low-power DSP processors. An orthogonal but practical contribution of this thesis is the test bench implementation. A PCI-based FPGA board has been used to equip a standard desktop PC with tester facilities. The test bench proved ... to be a viable alternative to conventional expensive test equipment. Finally, the work presented in this thesis has been published at several IEEE workshops and conferences, and in the Journal of VLSI Signal Processing.

  19. Increasing the specificity and function of DNA microarrays by processing arrays at different stringencies

    DEFF Research Database (Denmark)

    Dufva, Martin; Petersen, Jesper; Poulsen, Lena

    2009-01-01

    DNA microarrays have for a decade been the only platform for genome-wide analysis and have provided a wealth of information about living organisms. DNA microarrays are processed today under one condition only, which puts large demands on assay development because all probes on the array need to f...

  20. VLSI Design of a Variable-Length FFT/IFFT Processor for OFDM-Based Communication Systems

    Directory of Open Access Journals (Sweden)

    Jen-Chih Kuo

    2003-12-01

    Full Text Available The technique of orthogonal frequency division multiplexing (OFDM) is famous for its robustness against frequency-selective fading channels. This technique has been widely used in many wired and wireless communication systems. In general, the fast Fourier transform (FFT) and inverse FFT (IFFT) operations are used as the modulation/demodulation kernel in OFDM systems, and the sizes of the FFT/IFFT operations vary in different applications of OFDM systems. In this paper, we design and implement a variable-length prototype FFT/IFFT processor to cover different specifications of OFDM applications. The cached-memory FFT architecture is our suggested VLSI system architecture for the prototype FFT/IFFT processor, chosen for low power consumption. We also implement the twiddle factor butterfly processing element (PE) based on the coordinate rotation digital computer (CORDIC) algorithm, which avoids the use of a conventional multiplication-and-accumulation unit and instead evaluates the trigonometric functions using only add-and-shift operations. Finally, we implement a variable-length prototype FFT/IFFT processor with TSMC 0.35 μm 1P4M CMOS technology. The simulation results show that the chip can perform (64∼2048)-point FFT/IFFT operations up to an 80 MHz operating frequency, which can meet the speed requirement of most OFDM standards such as WLAN, ADSL, VDSL (256∼2K), DAB, and 2K-mode DVB.
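
    The CORDIC evaluation of the twiddle factors mentioned above can be sketched as the usual rotation-mode iteration; floating-point is used here for readability, whereas a hardware PE would use fixed-point adds and shifts and fold the constant gain into the datapath.

    # Rotation-mode CORDIC: rotate (1, 0) towards angle theta using only
    # additions and shifts by 2^-i; the accumulated gain is corrected at the end.
    import math

    def cordic_cos_sin(theta, n=32):
        angles = [math.atan(2.0 ** -i) for i in range(n)]
        gain = 1.0
        for i in range(n):
            gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = 1.0, 0.0, theta
        for i in range(n):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return x * gain, y * gain              # approximately (cos(theta), sin(theta))

    print(cordic_cos_sin(math.pi / 5))         # close to (0.8090, 0.5878)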

  1. An Analogue VLSI Implementation of the Meddis Inner Hair Cell Model

    Directory of Open Access Journals (Sweden)

    Alistair McEwan

    2003-06-01

    Full Text Available The Meddis inner hair cell model is a widely accepted, but computationally intensive computer model of mammalian inner hair cell function. We have produced an analogue VLSI implementation of this model that operates in real time in the current domain by using translinear and log-domain circuits. The circuit has been fabricated on a chip and tested against the Meddis model for (a) rate level functions for onset and steady-state response, (b) recovery after masking, (c) additivity, (d) two-component adaptation, (e) phase locking, (f) recovery of spontaneous activity, and (g) computational efficiency. The advantage of this circuit, over other electronic inner hair cell models, is its nearly exact implementation of the Meddis model which can be tuned to behave similarly to the biological inner hair cell. This has important implications on our ability to simulate the auditory system in real time. Furthermore, the technique of mapping a mathematical model of first-order differential equations to a circuit of log-domain filters allows us to implement real-time neuromorphic signal processors for a host of models using the same approach.

  2. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In VLSI physical design optimization, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips. Minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find an optimal solution for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits across these components of physical design using a hierarchical approach based on evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing also influence other criteria such as power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles to minimize area and interconnect length, is applied in each of these phases, as illustrated in the sketch below. This approach combines a genetic algorithm with simulated annealing in a hierarchical design to attain the objective. This hybrid approach can quickly produce optimal solutions for the popular benchmarks.
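
    To make the "hybrid" idea concrete, here is a deliberately tiny sketch of a genetic algorithm whose offspring are refined by a simulated-annealing local search, applied to a toy placement problem; the problem instance, operators and parameters are invented for illustration and are not the paper's benchmarks or algorithm.

    # Toy hybrid GA + simulated annealing: place 8 cells on a 4x2 grid so that
    # the total Manhattan wirelength of a fixed netlist is minimised.
    import math
    import random

    CELLS, GRID = 8, 4
    NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0)]

    def wirelength(perm):                      # perm[i] = grid slot of cell i
        pos = [(s % GRID, s // GRID) for s in perm]
        return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
                   for a, b in NETS)

    def anneal(perm, t=2.0, cooling=0.95, steps=60):   # SA-style local search
        best = list(perm)
        for _ in range(steps):
            i, j = random.sample(range(CELLS), 2)
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]        # swap two cell positions
            d = wirelength(cand) - wirelength(best)
            if d < 0 or random.random() < math.exp(-d / t):
                best = cand
            t *= cooling
        return best

    def crossover(a, b):                       # order crossover keeps a permutation
        cut = random.randrange(CELLS)
        head = a[:cut]
        return head + [g for g in b if g not in head]

    pop = [random.sample(range(CELLS), CELLS) for _ in range(10)]
    for _ in range(20):
        pop.sort(key=wirelength)
        pop[-1] = anneal(crossover(pop[0], pop[1]))    # refine offspring, replace worst
    print("best wirelength:", wirelength(min(pop, key=wirelength)))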

  3. 10 K gate I²L and 1 K component analog compatible bipolar VLSI technology - HIT-2

    Science.gov (United States)

    Washio, K.; Watanabe, T.; Okabe, T.; Horie, N.

    1985-02-01

    An advanced analog/digital bipolar VLSI technology that combines on the same chip 2-ns 10 K I²L gates with 1 K analog devices is proposed. The new technology, called high-density integration technology-2, is based on a new structure concept that consists of three major techniques: shallow grooved isolation, I²L active layer etching, and I²L current gain increase. I²L circuits with an 80-MHz maximum toggle frequency have been developed compatibly with n-p-n transistors having a BV_CEO of more than 10 V and an f_T of 5 GHz, and lateral p-n-p transistors having an f_T of 150 MHz.

  4. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s, with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it unfeasible to process the samples in parallel with their acquisition in a standard PXI system. A new architecture model has been developed that makes it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real time, the data from all acquired signals among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the data processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
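
    The idea of spreading per-channel processing across the controller and additional processing cards can be mimicked in software with a worker pool. The sketch below is only an analogy: the worker count, the round-robin assignment and the per-block reduction are illustrative assumptions, and the real system runs an iterative tomographic reconstruction rather than the placeholder average used here.

```python
from multiprocessing import Pool

N_DETECTORS = 20          # one 20-detector bolometer camera
SAMPLE_RATE = 50_000      # ~50 kS/s per channel, as in the abstract
N_WORKERS = 3             # system controller plus two processing cards (illustrative)

def process_block(args):
    """Stand-in for one reconstruction step applied to a block of samples."""
    channel, samples = args
    return channel, sum(samples) / len(samples)   # placeholder reduction

def distribute(blocks, n_workers=N_WORKERS):
    """Assign per-channel sample blocks to workers and gather the results."""
    with Pool(n_workers) as pool:
        return dict(pool.map(process_block, blocks))

if __name__ == "__main__":
    fake_blocks = [(ch, [0.0] * 1024) for ch in range(N_DETECTORS)]
    print(distribute(fake_blocks))
```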

  5. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    International Nuclear Information System (INIS)

    Due-Hansen, J; Poppe, E; Summanwar, A; Jensen, G U; Breivik, L; Wang, D T; Schjølberg-Henriksen, K; Midtbø, K

    2012-01-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing has been developed that resulted in a film with a sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in, for instance, intravascular cardiology and gastrointestinal imaging. (paper)

  6. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. As compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMT and anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information all together for each GAPD channel. For the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and

  7. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...

  8. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  9. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  10. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in their original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality as well as balance the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (a 10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
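
    A minimal sketch of the indexing idea, assuming a simple (variable, time step, spatial block) key that maps to the byte range of the corresponding array chunk in its native file; the field names, block granularity and file paths are invented for illustration and do not reproduce the MERRA/Hadoop implementation.

```python
from collections import namedtuple

# Each entry records where one array chunk lives inside its original file.
IndexEntry = namedtuple("IndexEntry",
                        "variable time lat_block lon_block path offset length")

class SpatioTemporalIndex:
    def __init__(self):
        self._entries = []

    def add(self, entry):
        self._entries.append(entry)

    def query(self, variable, time_range, lat_block, lon_block):
        """Return the chunks needed to answer a spatiotemporal query."""
        t0, t1 = time_range
        return [e for e in self._entries
                if e.variable == variable and t0 <= e.time <= t1
                and e.lat_block == lat_block and e.lon_block == lon_block]

idx = SpatioTemporalIndex()
idx.add(IndexEntry("T2M", 0, 3, 7, "/data/merra/1980-01.nc4", 4096, 65536))
idx.add(IndexEntry("T2M", 1, 3, 7, "/data/merra/1980-01.nc4", 69632, 65536))
print(idx.query("T2M", (0, 1), 3, 7))   # chunks a map task would read locally
```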

  11. A novel low-voltage low-power analogue VLSI implementation of neural networks with on-chip back-propagation learning

    Science.gov (United States)

    Carrasco, Manuel; Garde, Andres; Murillo, Pilar; Serrano, Luis

    2005-06-01

    In this paper a novel design and implementation of a VLSI analogue neural net based on a multi-layer perceptron (MLP) with an on-chip back-propagation (BP) learning algorithm, suitable for the resolution of classification problems, is described. In order to implement a general and programmable analogue architecture, the design has been carried out in a hierarchical way. The net has been divided into synapse blocks and neuron blocks, providing an easy method for analysis. These blocks basically consist of simple cells, which are mainly the activation functions (NAF), their derivatives (DNAF), multipliers and weight-update circuits. The analogue design is based on current-mode translinear techniques using MOS transistors working in the weak inversion region in order to reduce both the voltage supply and the power consumption. Moreover, with the purpose of minimizing the noise, offset and distortion of even order, the topologies are fully differential and balanced. The circuit, named ANNE (Analogue Neural NEt), has been prototyped and characterized as a proof of concept on CMOS AMI-0.5A technology, occupying a total area of 2.7 mm2. The chip includes two versions of neural nets with the on-chip BP learning algorithm, which are 2-1 and 2-2-1 implementations, respectively. The proposed nets have been experimentally tested using supply voltages from 2.5 V to 1.8 V, which is suitable for single-cell lithium-ion battery supply applications. Experimental results of both implementations included in ANNE exhibit good performance on solving classification problems. These results have been compared with other proposed analogue VLSI implementations of neural nets published in the literature, demonstrating that our proposal is very efficient in terms of occupied area and power consumption.
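
    As a point of reference for the on-chip learning described above, the following is a plain software 2-2-1 multilayer perceptron trained with back-propagation, mirroring the larger of the two chip topologies. The sigmoid activation, learning rate and toy classification task are illustrative assumptions and do not model the chip's current-mode NAF/DNAF circuits.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)   # 2 inputs -> 2 hidden neurons
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)   # 2 hidden  -> 1 output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification task (logical OR of the two inputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden layer activations
    out = sigmoid(h @ W2 + b2)          # network output
    err = out - y
    # Back-propagation: sigmoid derivative is a * (1 - a).
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))         # outputs after training
```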

  12. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Directory of Open Access Journals (Sweden)

    Ying-Lun Chen

    2015-08-01

    Full Text Available A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.
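
    A software sketch of the two signal-processing stages named above: spike detection with the nonlinear energy operator followed by feature extraction with the generalized Hebbian algorithm. The synthetic trace, window length, threshold factor and learning rate are illustrative assumptions; the fixed-point details of the VLSI implementation are not modelled.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(x, window=32, k=8.0):
    """Threshold the NEO output and cut out candidate spike windows."""
    psi = neo(x)
    idx = np.where(psi > k * psi.mean())[0]
    return [x[i:i + window] for i in idx if i + window <= len(x)]

def gha(spikes, n_components=3, lr=1e-3, epochs=20):
    """Generalized Hebbian algorithm: online estimate of principal components."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n_components, len(spikes[0])))
    for _ in range(epochs):
        for s in spikes:
            y = W @ s
            W += lr * (np.outer(y, s) - np.tril(np.outer(y, y)) @ W)
    return W

# Synthetic single-channel trace: noise plus a few injected spike-like bumps.
rng = np.random.default_rng(1)
trace = 0.05 * rng.standard_normal(5000)
for t in (500, 1500, 3200):
    trace[t:t + 16] += np.hanning(16)

spikes = detect_spikes(trace)
W = gha(spikes)                         # principal-component estimates
features = [W @ s for s in spikes]      # per-spike feature vectors for clustering
print(len(spikes), "candidate spikes,", len(features), "feature vectors")
```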

  13. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm.

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-08-13

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction.

  14. An Efficient VLSI Architecture for Multi-Channel Spike Sorting Using a Generalized Hebbian Algorithm

    Science.gov (United States)

    Chen, Ying-Lun; Hwang, Wen-Jyi; Ke, Chi-En

    2015-01-01

    A novel VLSI architecture for multi-channel online spike sorting is presented in this paper. In the architecture, the spike detection is based on nonlinear energy operator (NEO), and the feature extraction is carried out by the generalized Hebbian algorithm (GHA). To lower the power consumption and area costs of the circuits, all of the channels share the same core for spike detection and feature extraction operations. Each channel has dedicated buffers for storing the detected spikes and the principal components of that channel. The proposed circuit also contains a clock gating system supplying the clock to only the buffers of channels currently using the computation core to further reduce the power consumption. The architecture has been implemented by an application-specific integrated circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture has lower power consumption and hardware area costs for real-time multi-channel spike detection and feature extraction. PMID:26287193

  15. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    International Nuclear Information System (INIS)

    Tu, K T; Chung, C K

    2016-01-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold. (paper)

  16. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  17. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    International Nuclear Information System (INIS)

    Lin, Ta-Wei; Liao, Yunn-Shiuan; Chen, Chi-Feng; Yang, Jauh-Jung

    2008-01-01

    A dual-directional light-control film with a high-sag and high-asymmetric-shape long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. Such a lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, an electroplating process and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodical ripple. The measurement results of a prototype show that the incident light from a collimated LED with an FWHM dispersion angle of 12° is spread differently along the short and long axes. The numerical and experimental results show that the FWHMs of the view angle for angular brightness in the long and short axis directions through the long hexagonal lens are about 34.3° and 18.1°, and 31° and 13°, respectively. Compared with the simulation result, the errors in the long and short axes are about 5% and 16%, respectively. The asymmetric gapless microlens array can thus realize the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen, compared with a conventional diffusing film, for rear projection display applications.

  18. X-ray imager using solution processed organic transistor arrays and bulk heterojunction photodiodes on thin, flexible plastic substrate

    NARCIS (Netherlands)

    Gelinck, G.H.; Kumar, A.; Moet, D.; Steen, J.L. van der; Shafique, U.; Malinowski, P.E.; Myny, K.; Rand, B.P.; Simon, M.; Rütten, W.; Douglas, A.; Jorritsma, J.; Heremans, P.L.; Andriessen, H.A.J.M.

    2013-01-01

    We describe the fabrication and characterization of large-area active-matrix X-ray/photodetector array of high quality using organic photodiodes and organic transistors. All layers with the exception of the electrodes are solution processed. Because it is processed on a very thin plastic substrate

  19. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    Full Text Available The optimal control of a multibody spacecraft during the stretching process of its solar arrays is investigated, and a hybrid optimization strategy based on the Gauss pseudospectral method (GPM) and the direct shooting method (DSM) is presented. First, the elastic deformation of the flexible solar arrays is described approximately by the assumed mode method, and a dynamic model is established by the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. Using a small number of LG points, initial values of the state variables and control variables are obtained. A serial optimization framework is adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables are discretized at the LG points, and the precise optimal control inputs are obtained by DSM. The optimal trajectory of the system can be obtained through numerical integration. Numerical simulation shows that the stretching process of the solar arrays is stable with no detours, and that the control inputs satisfy the various constraints of actual conditions. The results indicate that the method is effective with good robustness. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method
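
    The final direct-shooting refinement stage can be illustrated on a toy problem. The sketch below applies direct shooting with piecewise-constant controls to a rest-to-rest maneuver of a double integrator; the dynamics, horizon, bounds and cost weights are illustrative assumptions standing in for the flexible solar-array model and the GPM-generated initial guess.

```python
import numpy as np
from scipy.optimize import minimize

N, T = 20, 10.0                      # number of control segments and horizon (s)
dt = T / N

def simulate(u):
    """Explicit-Euler rollout of a double integrator under piecewise-constant u."""
    x, v = 0.0, 0.0
    for uk in u:
        x += v * dt
        v += uk * dt
    return x, v

def cost(u):
    x, v = simulate(u)
    effort = dt * np.sum(u ** 2)
    # Penalise missing the target state (x = 1, v = 0) plus control effort.
    return effort + 100.0 * ((x - 1.0) ** 2 + v ** 2)

u0 = np.zeros(N)                     # initial guess (GPM would supply this in the paper)
res = minimize(cost, u0, method="SLSQP", bounds=[(-0.5, 0.5)] * N)
print("final state:", simulate(res.x), "cost:", res.fun)
```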

  20. ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays

    Science.gov (United States)

    Vasile, Stefan; Lipson, Jerold

    2012-01-01

    The objective of this work was to develop a new class of readout integrated circuit (ROIC) arrays to be operated with Geiger avalanche photodiode (GPD) arrays, by integrating multiple functions at the pixel level (smart-pixel or active pixel technology) in 250-nm CMOS (complementary metal oxide semiconductor) processes. In order to pack a maximum of functions within a minimum pixel size, the ROIC array is a full, custom application-specific integrated circuit (ASIC) design using a mixed-signal CMOS process with compact primitive layout cells. The ROIC array was processed to allow assembly in bump-bonding technology with photon-counting infrared detector arrays into 3-D imaging cameras (LADAR). The ROIC architecture was designed to work with either common- anode Si GPD arrays or common-cathode InGaAs GPD arrays. The current ROIC pixel design is hardwired prior to processing one of the two GPD array configurations, and it has the provision to allow soft reconfiguration to either array (to be implemented into the next ROIC array generation). The ROIC pixel architecture implements the Geiger avalanche quenching, bias, reset, and time to digital conversion (TDC) functions in full-digital design, and uses time domain over-sampling (vernier) to allow high temporal resolution at low clock rates, increased data yield, and improved utilization of the laser beam.

  1. Remote online process measurements by a fiber optic diode array spectrometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system

  2. Microneedle array electrode for human EEG recording.

    NARCIS (Netherlands)

    Lüttge, Regina; van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; van Putten, Michel Johannes Antonius Maria; Vander Sloten, Jos; Verdonck, Pascal; Nyssen, Marc; Haueisen, Jens

    2009-01-01

    Microneedle array electrodes for EEG significantly reduce the mounting time, particularly by circumvention of the need for skin preparation by scrubbing. We designed a new replication process for numerous types of microneedle arrays. Here, polymer microneedle array electrodes with 64 microneedles,

  3. FPGA Implementation of one-dimensional and two-dimensional cellular automata

    International Nuclear Information System (INIS)

    D'Antone, I.

    1999-01-01

    This report describes the hardware implementation of one-dimensional and two-dimensional cellular automata (CAs). After a general introduction to cellular automata, we consider a one-dimensional CA used to implement pseudo-random techniques in built-in self-test for VLSI. Due to the increase in digital ASIC complexity, testing is becoming one of the major costs in VLSI production. The high complexity of the electronics used in particle physics experiments demands higher reliability than in the past. General criteria are given to evaluate the feasibility of the circuit used for testing, and some quantitative parameters are underlined to optimize the architecture of the cellular automaton. Furthermore, we propose a two-dimensional CA that performs a peak-finding algorithm in a matrix of cells mapping a sub-region of a calorimeter. As in a two-dimensional filtering process, the peaks of the energy clusters are found in one evolution step. This CA belongs to the Wolfram class II cellular automata. Some quantitative parameters are given to optimize the architecture of the cellular automaton implemented in a commercial field programmable gate array (FPGA)
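
    The pseudo-random test-pattern generation role of a one-dimensional CA mentioned above can be sketched as follows, assuming the additive rule 90 (each cell becomes the XOR of its two neighbours) with null boundary cells; the register width, seed and pattern count are illustrative choices rather than values from the report.

```python
def rule90_step(state):
    """One evolution step of a rule-90 CA with null (zero) boundaries."""
    n = len(state)
    return [(state[i - 1] if i > 0 else 0) ^ (state[i + 1] if i < n - 1 else 0)
            for i in range(n)]

def generate_patterns(n_cells=16, n_patterns=8, seed=1):
    """Emit successive CA states as pseudo-random test patterns."""
    state = [(seed >> i) & 1 for i in range(n_cells)]
    patterns = []
    for _ in range(n_patterns):
        patterns.append(int("".join(map(str, state)), 2))
        state = rule90_step(state)
    return patterns

for p in generate_patterns():
    print(f"{p:016b}")
```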

  4. Process Development And Simulation For Cold Fabrication Of Doubly Curved Metal Plate By Using Line Array Roll Set

    International Nuclear Information System (INIS)

    Shim, D. S.; Jung, C. G.; Seong, D. Y.; Yang, D. Y.; Han, J. M.; Han, M. S.

    2007-01-01

    For effective manufacturing of a doubly curved sheet metal, a novel sheet metal forming process is proposed. The suggested process uses a Line Array Roll Set (LARS) composed of a pair of upper and lower roll assemblies in a symmetric manner. The process offers flexibility as compared with the conventional manufacturing processes, because it does not require any complex-shaped die and loss of material by blank-holding is minimized. LARS allows flexibility of the incremental forming process and adopts the principle of bending deformation, resulting in a slight deformation in thickness. Rolls composed of line array roll sets are divided into a driving roll row and two idle roll rows. The arrayed rolls in the central lines of the upper and lower roll assemblies are motor-driven so that they deform and transfer the sheet metal using friction between the rolls and the sheet metal. The remaining rolls are idle rolls, generating bending deformation with driving rolls. Furthermore, all the rolls are movable in any direction so that they are adaptable to any size or shape of the desired three-dimensional configuration. In the process, the sheet is deformed incrementally as deformation proceeds simultaneously in rolling and transverse directions step by step. Consequently, it can be applied to the fabrication of doubly curved ship hull plates by undergoing several passes. In this work, FEM simulations are carried out for verification of the proposed incremental forming system using the chosen design parameters. Based on the results of the simulation, the relationship between the roll set configuration and the curvature of a sheet metal is determined. The process information such as the forming loads and torques acting on every roll is analyzed as important data for the design and development of the manufacturing system

  5. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation in low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder to 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  6. A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station

    Science.gov (United States)

    Kwatra, S. C.; King, Brent

    1995-01-01

    This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate and dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design stages start with a functional representation of the system by using the C programming language. Then it progresses into a register based representation using the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block routed to form the main components of the system. The main components are block routed to form the final demodulator.

  7. Acoustic array systems theory, implementation, and application

    CERN Document Server

    Bai, Mingsian R; Benesty, Jacob

    2013-01-01

    Presents a unified framework of far-field and near-field array techniques for noise source identification and sound field visualization, from theory to application. Acoustic Array Systems: Theory, Implementation, and Application provides an overview of microphone array technology with applications in noise source identification and sound field visualization. In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applic

  8. Solar array flight dynamic experiment

    Science.gov (United States)

    Schock, Richard W.

    1987-01-01

    The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.

  9. Low-voltage current-mode CMOS building blocks for field programmable analog arrays and application

    International Nuclear Information System (INIS)

    Madian, A.H.K.

    2007-01-01

    The role of analog integrated circuits in modern electronic systems remains important, even though digital circuits dominate the market for VLSI solutions. Analog systems have always played an essential role in interfacing digital electronics to the real world in applications such as analog signal processing and signal conditioning, industrial process and motion control, and biomedical measurements. In addition, analog solutions are becoming increasingly competitive with digital circuits for dense, low-power, high-speed applications in low-precision signal processing. Because of the wide variety of analog functions required in electronic systems and the complexity of the signals (frequency, time, signal levels, parasitics), analog system design is very specialized and is supported by a diverse set of CAD tools that are more difficult to integrate than those required for digital design. The drive towards shorter design cycles for analog integrated circuits has demanded the development of high-performance analog circuits that are reconfigurable and suitable for CAD methodologies. The work presented here aims to contribute to this field.

  10. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Full Text Available Data compression plays a vital role in multimedia devices to present information in a succinct form. The DCT structure has been used for image compression because of its low complexity and area efficiency. The 2D DCT also provides reasonable data compression, but its implementation requires more multipliers and adders, which leads to larger area and higher power consumption. Taking all of this into account, this paper presents a VLSI architecture for image compression using a ROM-free DA-based DCT (discrete cosine transform) structure. This technique provides high throughput and is most suitable for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels as compared to the previous techniques. Overall, the proposed architecture yields reduced memory, low power consumption and high throughput. MATLAB is used as a supporting tool for supplying the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II for synthesis and for obtaining details about power and area.
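
    As an illustration of the adder family named above, the following is a bit-level software sketch of a Kogge-Stone parallel-prefix adder: generate/propagate signals are combined in log2(n) prefix stages before the sum bits are formed. The 16-bit width and pure-Python style are illustrative; this is not the paper's Verilog implementation.

```python
def kogge_stone_add(a, b, width=16):
    """Add two unsigned integers using a Kogge-Stone generate/propagate prefix tree."""
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]   # generate bits
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]   # propagate bits
    G, P = g[:], p[:]
    dist = 1
    while dist < width:
        # Combine each position with the one `dist` places below it (prefix stage).
        G = [G[i] | (P[i] & G[i - dist]) if i >= dist else G[i] for i in range(width)]
        P = [P[i] & P[i - dist] if i >= dist else P[i] for i in range(width)]
        dist *= 2
    carries = [0] + G[:-1]            # carry into bit i is group-generate of bits below
    s = 0
    for i in range(width):
        s |= (p[i] ^ carries[i]) << i
    return s & ((1 << width) - 1)

assert kogge_stone_add(12345, 54321) == (12345 + 54321) & 0xFFFF
print(kogge_stone_add(12345, 54321))
```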

  11. Dumand-array data-acquisition system

    International Nuclear Information System (INIS)

    Brenner, A.E.; Theriot, D.; Dau, W.D.; Geelhood, B.D.; Harris, F.; Learned, J.G.; Stenger, V.; March, R.; Roos, C.; Shumard, E.

    1982-04-01

    An overall data acquisition approach for DUMAND is described. The scheme assumes one array-to-shore optical fiber transmission line for each string of the array. The basic event sampling period is approx. 13 μsec. All potentially interesting data are transmitted to shore, where the major processing is performed.

  12. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    Directory of Open Access Journals (Sweden)

    Wu Wei

    2008-01-01

    Full Text Available Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for the realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close-packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows good agreement with the experimental results.

  13. In situ synthesis of protein arrays.

    Science.gov (United States)

    He, Mingyue; Stoevesandt, Oda; Taussig, Michael J

    2008-02-01

    In situ or on-chip protein array methods use cell free expression systems to produce proteins directly onto an immobilising surface from co-distributed or pre-arrayed DNA or RNA, enabling protein arrays to be created on demand. These methods address three issues in protein array technology: (i) efficient protein expression and availability, (ii) functional protein immobilisation and purification in a single step and (iii) protein on-chip stability over time. By simultaneously expressing and immobilising many proteins in parallel on the chip surface, the laborious and often costly processes of DNA cloning, expression and separate protein purification are avoided. Recently employed methods reviewed are PISA (protein in situ array) and NAPPA (nucleic acid programmable protein array) from DNA and puromycin-mediated immobilisation from mRNA.

  14. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Full Text Available Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process, since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them can cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing a quasiconformal transformation into a predefined kernel. Experimental results, carried out on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement is shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
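
    Framing defect detection as one-class learning can be sketched with an off-the-shelf tool. The example below uses scikit-learn's OneClassSVM with an RBF kernel as a readily available stand-in for SVDD; the quasiconformal kernel transformation that defines QK-SVDD is not reproduced here, and the feature vectors are synthetic rather than real LCD panel images.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_patches = rng.normal(loc=0.0, scale=1.0, size=(500, 16))   # defect-free training features
test_patches = np.vstack([
    rng.normal(0.0, 1.0, size=(20, 16)),        # more defect-free samples
    rng.normal(4.0, 1.0, size=(5, 16)),         # simulated defects (shifted statistics)
])

# Train the one-class model on defect-free data only, then flag outliers.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_patches)
labels = detector.predict(test_patches)          # +1 = accepted as normal, -1 = flagged
print("flagged as defects:", np.where(labels == -1)[0])
```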

  15. Phased arrays techniques and split spectrum processing for inspection of thick titanium casting components

    International Nuclear Information System (INIS)

    Banchet, J.; Chahbaz, A.; Sicard, R.; Zellouf, D.E.

    2003-01-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because their large grain structure increases noise drastically. In this work, phased array inspection setups were developed to detect small defects, such as simulated inclusions and porosity, contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A split spectrum processing (SSP)-based algorithm was then implemented on the acquired data by employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement in the signal-to-noise ratio and thus in detectability
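
    The filter-bank recombination described above can be sketched as follows: the broadband A-scan is passed through parallel band-pass filters with staggered centre frequencies and the outputs are recombined by minimization, which suppresses frequency-dependent grain noise while retaining echoes present in every band. The sampling rate, centre frequencies, bandwidth and synthetic signal are illustrative assumptions, not the titanium casting data.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100e6                                   # 100 MHz sampling rate (illustrative)
t = np.arange(0, 20e-6, 1 / fs)
rng = np.random.default_rng(0)
ascan = 0.2 * rng.standard_normal(t.size)    # grain-noise stand-in
flaw = np.exp(-((t - 10e-6) ** 2) / (0.1e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
ascan += flaw                                # flaw echo centred at 10 microseconds

def band(signal, f0, bw=1e6):
    """Zero-phase band-pass filtering around centre frequency f0."""
    b, a = butter(4, [(f0 - bw / 2) / (fs / 2), (f0 + bw / 2) / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

centres = np.arange(3e6, 8e6, 1e6)           # parallel band-pass filter bank
bands = np.array([np.abs(band(ascan, f0)) for f0 in centres])
ssp_output = bands.min(axis=0)               # minimization recombination
print("peak index:", ssp_output.argmax(), "flaw centre index:", int(10e-6 * fs))
```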

  16. Driving a car with custom-designed fuzzy inferencing VLSI chips and boards

    Science.gov (United States)

    Pin, Francois G.; Watanabe, Yutaka

    1993-01-01

    Vehicle control in a-priori unknown, unpredictable, and dynamic environments requires many calculational and reasoning schemes to operate on the basis of very imprecise, incomplete, or unreliable data. For such systems, in which all the uncertainties can not be engineered away, approximate reasoning may provide an alternative to the complexity and computational requirements of conventional uncertainty analysis and propagation techniques. Two types of computer boards including custom-designed VLSI chips were developed to add a fuzzy inferencing capability to real-time control systems. All inferencing rules on a chip are processed in parallel, allowing execution of the entire rule base in about 30 microseconds, and therefore, making control of 'reflex-type' of motions envisionable. The use of these boards and the approach using superposition of elemental sensor-based behaviors for the development of qualitative reasoning schemes emulating human-like navigation in a-priori unknown environments are first discussed. Then how the human-like navigation scheme implemented on one of the qualitative inferencing boards was installed on a test-bed platform to investigate two control modes for driving a car in a-priori unknown environments on the basis of sparse and imprecise sensor data is described. In the first mode, the car navigates fully autonomously, while in the second mode, the system acts as a driver's aid providing the driver with linguistic (fuzzy) commands to turn left or right and speed up or slow down depending on the obstacles perceived by the sensors. Experiments with both modes of control are described in which the system uses only three acoustic range (sonar) sensor channels to perceive the environment. Simulation results as well as indoors and outdoors experiments are presented and discussed to illustrate the feasibility and robustness of autonomous navigation and/or safety enhancing driver's aid using the new fuzzy inferencing hardware system and some human

  17. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have an intelligible and reliable calculation of the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored.

  18. Solution-Processed Wide-Bandgap Organic Semiconductor Nanostructures Arrays for Nonvolatile Organic Field-Effect Transistor Memory.

    Science.gov (United States)

    Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei

    2018-01-01

    In this paper, the development of an organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructure (NS) arrays of the wide-bandgap (WBG) small-molecule organic semiconductor material [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared by phase separation during spin-coating of blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with a smooth WG3 film, the device based on WG3 NS arrays exhibits significant improvements in memory performance, including a larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10^4 s), and reliable switching properties. A quantitative study of the WG3 NS morphology reveals that the enhanced memory performance is attributed to the improved charge trapping/charge-exciton annihilation efficiency induced by the increased contact area between the WG3 NSs and the pentacene layer. This versatile solution-processing approach to preparing WG3 NS arrays as charge trapping sites allows for the fabrication of high-performance nonvolatile OFET memory devices, which could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Cantilever arrays with self-aligned nanotips of uniform height

    International Nuclear Information System (INIS)

    Koelmans, W W; Peters, T; Berenschot, E; De Boer, M J; Siekman, M H; Abelmann, L

    2012-01-01

    Cantilever arrays are employed to increase the throughput of imaging and manipulation at the nanoscale. We present a fabrication process to construct cantilever arrays with nanotips that show a uniform tip–sample distance. Such uniformity is crucial, because in many applications the cantilevers do not feature individual tip–sample spacing control. Uniform cantilever arrays lead to very similar tip–sample interaction within an array, enable non-contact modes for arrays and give better control over the load force in contact modes. The developed process flow uses a single mask to define both tips and cantilevers. An additional mask is required for the back side etch. The tips are self-aligned in the convex corner at the free end of each cantilever. Although we use standard optical contact lithography, we show that the convex corner can be sharpened to a nanometre scale radius by an isotropic underetch step. The process is robust and wafer-scale. The resonance frequencies of the cantilevers within an array are shown to be highly uniform with a relative standard error of 0.26% or lower. The tip–sample distance within an array of up to ten cantilevers is measured to have a standard error around 10 nm. An imaging demonstration using the AFM shows that all cantilevers in the array have a sharp tip with a radius below 10 nm. The process flow for the cantilever arrays finds application in probe-based nanolithography, probe-based data storage, nanomanufacturing and parallel scanning probe microscopy. (paper)

  20. An analog VLSI chip emulating polarization vision of Octopus retina.

    Science.gov (United States)

    Momeni, Massoud; Titus, Albert H

    2006-01-01

    Biological systems provide a wealth of information which form the basis for human-made artificial systems. In this work, the visual system of Octopus is investigated and its polarization sensitivity mimicked. While in actual Octopus retina, polarization vision is mainly based on the orthogonal arrangement of its photoreceptors, our implementation uses a birefringent micropolarizer made of YVO4 and mounted on a CMOS chip with neuromorphic circuitry to process linearly polarized light. Arranged in an 8 x 5 array with two photodiodes per pixel, each consuming typically 10 microW, this circuitry mimics both the functionality of individual Octopus retina cells by computing the state of polarization and the interconnection of these cells through a bias-controllable resistive network.

  1. Application of Seismic Array Processing to Tsunami Early Warning

    Science.gov (United States)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implement this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800
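
    The step from an imaged rupture area to a simple uniform-slip source can be written down directly from the definition of seismic moment, M0 = mu * A * D. The numbers below (rigidity, ellipse-equivalent area and magnitude estimate) are illustrative assumptions, not the values inferred for the Tohoku or Iquique events in the abstract.

```python
# Back-of-the-envelope construction of a uniform-slip model from a back-projected
# rupture area: given a magnitude estimate, recover the average slip D = M0 / (mu * A).
MU = 3.0e10                 # crustal rigidity in Pa (assumed)
area_km2 = 400.0 * 150.0    # ellipse-equivalent rupture area in km^2 (assumed)
Mw = 9.0                    # moment magnitude estimate (assumed)

M0 = 10 ** (1.5 * Mw + 9.1)                 # seismic moment in N*m
avg_slip_m = M0 / (MU * area_km2 * 1e6)     # average slip in metres
print(f"average slip ~ {avg_slip_m:.1f} m")
```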

  2. Cas4-Dependent Prespacer Processing Ensures High-Fidelity Programming of CRISPR Arrays.

    Science.gov (United States)

    Lee, Hayun; Zhou, Yi; Taylor, David W; Sashital, Dipali G

    2018-04-05

    CRISPR-Cas immune systems integrate short segments of foreign DNA as spacers into the host CRISPR locus to provide molecular memory of infection. Cas4 proteins are widespread in CRISPR-Cas systems and are thought to participate in spacer acquisition, although their exact function remains unknown. Here we show that Bacillus halodurans type I-C Cas4 is required for efficient prespacer processing prior to Cas1-Cas2-mediated integration. Cas4 interacts tightly with the Cas1 integrase, forming a heterohexameric complex containing two Cas1 dimers and two Cas4 subunits. In the presence of Cas1 and Cas2, Cas4 processes double-stranded substrates with long 3' overhangs through site-specific endonucleolytic cleavage. Cas4 recognizes PAM sequences within the prespacer and prevents integration of unprocessed prespacers, ensuring that only functional spacers will be integrated into the CRISPR array. Our results reveal the critical role of Cas4 in maintaining fidelity during CRISPR adaptation, providing a structural and mechanistic model for prespacer processing and integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Modular Matrix Multiplication on a Linear Array.

    Science.gov (United States)

    1983-11-01


  4. Fully Integrated Linear Single Photon Avalanche Diode (SPAD) Array with Parallel Readout Circuit in a Standard 180 nm CMOS Process

    Science.gov (United States)

    Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.

    2011-05-01

    This paper reports on the development of a SPAD device, fabricated in a UMC 0.18 μm CMOS process, and its subsequent use in an actively quenched single-photon-counting imaging system. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. In the SPAD I-V response, ID was found to increase slowly until the breakdown voltage VBD was reached (excess bias voltage Ve = 11.03 V) and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count rate was found to be approximately 13 kHz for most of the 16 SPAD pixels, and the dead time was estimated to be 40 ns.
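
    With the dark count rate and dead time quoted above, a simple dead-time correction can be illustrated. The non-paralyzable dead-time model below is an assumption made for illustration; the paper itself only reports the measured figures.

```python
# Minimal sketch, assuming the standard non-paralyzable dead-time model
# (an illustrative assumption, not a model stated in the paper).
def true_count_rate(measured_hz: float, dead_time_s: float) -> float:
    """Correct a measured SPAD count rate for detector dead time."""
    return measured_hz / (1.0 - measured_hz * dead_time_s)

# Example with the figures quoted in the abstract: 13 kHz dark counts, 40 ns dead time.
print(true_count_rate(13e3, 40e-9))  # correction is negligible at this rate (~13.007 kHz)
```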

  5. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    International Nuclear Information System (INIS)

    Koohbor, M.; Soltanian, S.; Najafi, M.; Servati, P.

    2012-01-01

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co 1−x Zn x (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometry (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of the NWs can be significantly improved by an appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on the magnetic properties of the NW arrays. The changes in the magnetic properties of the NWs are rooted in a competition between shape anisotropy and

  6. Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues).

    Science.gov (United States)

    Dante, V; Del Giudice, P; Mattia, M

    2001-01-01

    We review a series of implementations of electronic devices aiming at imitating to some extent structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a special purpose, asynchronous bus for communication among neuromorphic chips; a short and preliminary account of an application-oriented device, taking advantage of the above communication infrastructure.

  7. Beam pattern improvement by compensating array nonuniformities in a guided wave phased array

    International Nuclear Information System (INIS)

    Kwon, Hyu-Sang; Lee, Seung-Seok; Kim, Jin-Yeon

    2013-01-01

    This paper presents a simple data processing algorithm which can improve the performance of a uniform circular array based on guided wave transducers. The algorithm, being intended to be used with the delay-and-sum beamformer, effectively eliminates the effects of nonuniformities that can significantly degrade the beam pattern. Nonuniformities can arise intrinsically from the array geometry when the circular array is transformed to a linear array for beam steering and extrinsically from unequal conditions of transducers such as element-to-element variations of sensitivity and directivity. The effects of nonuniformities are compensated by appropriately imposing weight factors on the elements in the projected linear array. Different cases are simulated, where the improvements of the beam pattern, especially the level of the highest sidelobe, are clearly seen, and related issues are discussed. An experiment is performed which uses A0 mode Lamb waves in a steel plate, to demonstrate the usefulness of the proposed method. The discrepancy between theoretical and experimental beam patterns is explained by accounting for near-field effects. (paper)
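
    A sketch of the weighted delay-and-sum idea the abstract describes, written here in the frequency domain for a linear (projected) array; the element weights stand in for the compensation factors, and the geometry, wave speed and look direction below are assumptions for illustration only.

```python
# Illustrative NumPy sketch of a weighted delay-and-sum beamformer.
import numpy as np

def delay_and_sum(positions, signals_fft, freq, c, steer_angle_rad, weights=None):
    """Steer a linear array toward steer_angle_rad.

    positions   : (M,) element x-coordinates in metres
    signals_fft : (M,) complex spectra of the element signals at `freq`
    freq        : frequency in Hz, c : propagation speed in m/s
    weights     : (M,) real compensation weights (uniform if None)
    """
    positions = np.asarray(positions, dtype=float)
    if weights is None:
        weights = np.ones_like(positions)
    k = 2.0 * np.pi * freq / c
    # phase shifts that align a plane wave arriving from steer_angle_rad
    steering = np.exp(1j * k * positions * np.sin(steer_angle_rad))
    return np.sum(weights * steering * signals_fft) / np.sum(weights)

# usage sketch: 8-element array, half-wavelength spacing, look direction 20 degrees
c, f = 3000.0, 100e3                      # example guided-wave speed and frequency
lam = c / f
pos = np.arange(8) * 0.5 * lam
snapshot = np.exp(-1j * 2 * np.pi * f / c * pos * np.sin(np.deg2rad(20)))  # simulated arrival
print(abs(delay_and_sum(pos, snapshot, f, c, np.deg2rad(20))))  # ~1.0 at the true angle
```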

  8. GLOBECOM '84 - Global Telecommunications Conference, Atlanta, GA, November 26-29, 1984, Conference Record. Volume 1

    Science.gov (United States)

    The subjects discussed are related to LSI/VLSI based subscriber transmission and customer access for the Integrated Services Digital Network (ISDN), special applications of fiber optics, ISDN and competitive telecommunication services, technical preparations for the Geostationary-Satellite Orbit Conference, high-capacity statistical switching fabrics, networking and distributed systems software, adaptive arrays and cancelers, synchronization and tracking, speech processing, advances in communication terminals, full-color videotex, and a performance analysis of protocols. Advances in data communications are considered along with transmission network plans and progress, direct broadcast satellite systems, packet radio system aspects, radio-new and developing technologies and applications, the management of software quality, and Open Systems Interconnection (OSI) aspects of telematic services. Attention is given to personal computers and OSI, the role of software reliability measurement in information systems, and an active array antenna for the next-generation direct broadcast satellite.

  9. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    Science.gov (United States)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques for surveying surface activity do not provide adequate time and/or space resolution. Furthermore, these methods can hardly investigate the dynamics of the events, since detection is made a posteriori. To increase our knowledge of landscape evolution and of the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are relevant for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at catchment scale. The spectral analysis of continuous records shows high-frequency (>1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (>300 occurrences) associated with debris falls and bank collapses during daily convective storms, with 69% of occurrences coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which somehow reflects the lack of extreme climatic conditions during the experiment. However, high-resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes. It

  10. Flexible eddy current coil arrays

    International Nuclear Information System (INIS)

    Krampfner, Y.; Johnson, D.P.

    1987-01-01

    A novel approach was devised to overcome certain limitations of conventional eddy current testing. The typical single-element hand-wound probe was replaced with a two dimensional array of spirally wound probe elements deposited on a thin, flexible polyimide substrate. This provides full and reliable coverage of the test area and eliminates the need for scanning. The flexible substrate construction of the array allows the probes to conform to irregular part geometries, such as turbine blades and tubing, thereby eliminating the need for specialized probes for each geometry. Additionally, the batch manufacturing process of the array can yield highly uniform and reproducible coil geometries. The array is driven by a portable computer-based eddy current instrument, smartEDDY™, capable of two-frequency operation, and offers a great deal of versatility and flexibility due to its software-based architecture. The array is coupled to the instrument via an 80-switch multiplexer that can be configured to address up to 1600 probes. The individual array elements may be addressed in any desired sequence, as defined by the software

  11. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic

  12. Robust working memory in an asynchronously spiking neural network realized in neuromorphic VLSI

    Directory of Open Access Journals (Sweden)

    Massimiliano eGiulioni

    2012-02-01

    Full Text Available We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of ‘high’ and ‘low’-firing activity. Depending on the overall excitability, transitions to the ‘high’ state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the ‘high’ state retains a working memory of a stimulus until well after its release. In the latter case, ‘high’ states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated corrupted ‘high’ states comprising neurons of both excitatory populations. Within a basin of attraction, the network dynamics corrects such states and re-establishes the prototypical ‘high’ state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.

  13. Robust Working Memory in an Asynchronously Spiking Neural Network Realized with Neuromorphic VLSI.

    Science.gov (United States)

    Giulioni, Massimiliano; Camilleri, Patrick; Mattia, Maurizio; Dante, Vittorio; Braun, Jochen; Del Giudice, Paolo

    2011-01-01

    We demonstrate bistable attractor dynamics in a spiking neural network implemented with neuromorphic VLSI hardware. The on-chip network consists of three interacting populations (two excitatory, one inhibitory) of leaky integrate-and-fire (LIF) neurons. One excitatory population is distinguished by strong synaptic self-excitation, which sustains meta-stable states of "high" and "low"-firing activity. Depending on the overall excitability, transitions to the "high" state may be evoked by external stimulation, or may occur spontaneously due to random activity fluctuations. In the former case, the "high" state retains a "working memory" of a stimulus until well after its release. In the latter case, "high" states remain stable for seconds, three orders of magnitude longer than the largest time-scale implemented in the circuitry. Evoked and spontaneous transitions form a continuum and may exhibit a wide range of latencies, depending on the strength of external stimulation and of recurrent synaptic excitation. In addition, we investigated "corrupted" "high" states comprising neurons of both excitatory populations. Within a "basin of attraction," the network dynamics "corrects" such states and re-establishes the prototypical "high" state. We conclude that, with effective theoretical guidance, full-fledged attractor dynamics can be realized with comparatively small populations of neuromorphic hardware neurons.
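
    A software sketch of the leaky integrate-and-fire dynamics that underlie the attractor network described above. This is not the VLSI chip or the three-population network; the parameters and the noisy constant drive (standing in for recurrent excitation) are illustrative assumptions.

```python
# Minimal LIF neuron simulation; parameters are illustrative, not those of the chip.
import numpy as np

def simulate_lif(i_ext, dt=1e-4, t_end=1.0, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, refractory=2e-3):
    """Integrate dV/dt = (-(V - v_rest) + i_ext(t)) / tau and return spike times."""
    n_steps = int(t_end / dt)
    v, spikes, next_allowed = v_rest, [], 0.0
    for step in range(n_steps):
        t = step * dt
        if t < next_allowed:          # absolute refractory period
            continue
        v += dt * (-(v - v_rest) + i_ext(t)) / tau
        if v >= v_thresh:             # threshold crossing -> emit spike and reset
            spikes.append(t)
            v = v_reset
            next_allowed = t + refractory
    return spikes

# constant suprathreshold drive plus noise as a stand-in for recurrent excitation
rng = np.random.default_rng(0)
drive = lambda t: 1.5 + 0.3 * rng.standard_normal()
print(f"{len(simulate_lif(drive))} spikes in 1 s")
```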

  14. Development of a VLSI integrated circuit used for time measurement and selective readout in the front-end electronics of the DIRC for the BaBar experiment at SLAC; Developpement d'un circuit integre VLSI assurant mesure de temps et lecture selective dans l'electronique frontale du compteur DIRC de l'experience babar a slac

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1999-07-01

    This thesis deals with the design, development and testing of a VLSI integrated circuit providing selective readout and time measurement for 16 channels. This circuit has been developed for a particle physics experiment, BaBar, that will take place at SLAC (Stanford Linear Accelerator Center). The first part describes the physics goals of the experiment, the electronics architecture and the place of the developed circuit in the research program. The second part presents the design of the circuit, the prototypes leading to the final version and the validation tests. (A.L.B.)

  15. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
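
    The metric being optimized above can be illustrated with a small sketch: counting how many chunks a hyper-rectangular range query touches for a given chunk shape. The query, array size and the two example chunk shapes are assumptions chosen for illustration.

```python
# Illustrative sketch of the chunking metric: chunks overlapped by a range query.
import math

def chunks_overlapped(query_lo, query_hi, chunk_shape):
    """Number of chunks touched by the half-open query box [lo, hi) per dimension."""
    counts = []
    for lo, hi, c in zip(query_lo, query_hi, chunk_shape):
        first = lo // c
        last = (hi - 1) // c
        counts.append(last - first + 1)
    return math.prod(counts)

# A 60x40 query against a large 2-D array: compare two chunk shapes of equal size (400 cells).
print(chunks_overlapped((100, 200), (160, 240), (20, 20)))   # square chunks  -> 6 chunks
print(chunks_overlapped((100, 200), (160, 240), (400, 1)))   # skinny chunks  -> 40 chunks
```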

  16. Arraying proteins by cell-free synthesis.

    Science.gov (United States)

    He, Mingyue; Wang, Ming-Wei

    2007-10-01

    Recent advances in the life sciences have led to great motivation for the development of protein arrays to study the functions of genome-encoded proteins. While traditional cell-based methods have commonly been used for generating protein arrays, they are usually time-consuming and face a number of technical challenges. Cell-free protein synthesis offers an attractive system for making protein arrays: not only does it rapidly convert genetic information into functional proteins without the need for DNA cloning, but it also provides a flexible environment amenable to the production of folded proteins or proteins with defined modifications. Recent advancements have made it possible to rapidly generate protein arrays from PCR DNA templates through parallel on-chip protein synthesis. This article reviews current cell-free protein array technologies and their proteomic applications.

  17. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using the fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some of the array processors are optimal in the framework of linear allocation of computations and in terms of the number of processing elements and computing time. (author)
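
    For reference, the fraction-free elimination the array processors are built around is Bareiss-style Gaussian elimination, in which every division is exact so all intermediate values remain integers. The sketch below is plain sequential Python, not a systolic-array mapping, and the example system is an assumption for illustration.

```python
# Sketch of fraction-free (Bareiss) Gaussian elimination on an integer matrix.
def bareiss(matrix):
    """Fraction-free elimination; returns the upper-triangularized matrix.
    All intermediate values stay integers because each division is exact."""
    a = [row[:] for row in matrix]
    n = len(a)
    prev_pivot = 1
    for k in range(n - 1):
        if a[k][k] == 0:
            raise ZeroDivisionError("zero pivot; pivoting not implemented in this sketch")
        for i in range(k + 1, n):
            for j in range(k + 1, len(a[i])):
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev_pivot
            a[i][k] = 0
        prev_pivot = a[k][k]
    return a

# augmented system [A | b]; the final pivot equals det(A)
system = [[2, 1, -1, 8],
          [-3, -1, 2, -11],
          [-2, 1, 2, -3]]
print(bareiss(system))   # [[2, 1, -1, 8], [0, 1, 1, 2], [0, 0, -1, 1]]
```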

  18. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-10

    [Report excerpt; only fragments are legible. The recoverable material indicates that a computational electromagnetics software package, FEKO [24], is used to model the antenna arrays and that the RMIM [12] is also employed; the remainder consists of partial references (International Symposium on Intelligent Signal Processing and Communications Systems, Chengdu, China, 2010; FEKO Suite 6.3, EM Software & Systems-S.A. (Pty) Ltd) and standard report-documentation-page boilerplate.]

  19. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
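
    As context for the comparison above, the sketch below shows standard narrowband MUSIC (the subspace baseline the t-f variants build on), not the t-f ML estimator itself. The array geometry, source angles and SNR are assumptions for illustration.

```python
# Standard narrowband MUSIC pseudospectrum for a uniform linear array (illustration only).
import numpy as np

def music_spectrum(snapshots, n_sources, angles_deg, d_over_lambda=0.5):
    """snapshots: (M, N) array of N snapshots from an M-element ULA."""
    m = snapshots.shape[0]
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]     # sample covariance
    _, eigvecs = np.linalg.eigh(r)                               # ascending eigenvalues
    noise_space = eigvecs[:, : m - n_sources]                    # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * 2 * np.pi * d_over_lambda * np.arange(m) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(noise_space.conj().T @ a) ** 2)
    return np.array(spectrum)

# two uncorrelated sources at -10 and 25 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(1)
m, n = 8, 400
doas = np.deg2rad([-10.0, 25.0])
a_mat = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
x = a_mat @ rng.standard_normal((2, n)) + 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
grid = np.arange(-90, 90.5, 0.5)
p = music_spectrum(x, 2, grid)
local_max = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
top = local_max[np.argsort(p[local_max])[-2:]]
print(np.sort(grid[top]))   # expected near [-10, 25]
```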

  20. Analyzing CMOS/SOS fabrication for LSI arrays

    Science.gov (United States)

    Ipri, A. C.

    1978-01-01

    Report discusses set of design rules that have been developed as result of work with test arrays. Set of optimum dimensions is given that would maximize process output and would correspondingly minimize costs in fabrication of large-scale integration (LSI) arrays.

  1. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  2. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) were used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in silicon wafer was always below one in this study. An orthogonal array was first conducted to obtain the initial solution set. The initial solution set was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain to obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was also incorporated into the SNAOA so that the searching process may have a better opportunity to reach a near global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) downward axial gas flow cooling scheme; (2) upward axial gas flow cooling scheme; (3) dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach

  3. Mosaic Process for the Fabrication of an Acoustic Transducer Array

    National Research Council Canada - National Science Library

    2005-01-01

    .... Deriving a geometric shape for the array based on the established performance level. Selecting piezoceramic materials based on considerations related to the performance level and derived geometry...

  4. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure, applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences amongst the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used in order to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between the years 1995-2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.

  5. The Owens Valley Millimeter Array

    International Nuclear Information System (INIS)

    Padin, S.; Scott, S.L.; Woody, D.P.; Scoville, N.Z.; Seling, T.V.

    1991-01-01

    The telescopes and signal processing systems of the Owens Valley Millimeter Array are considered, and improvements in the sensitivity and stability of the instrument are characterized. The instrument can be applied to map sources in the 85 to 115 GHz and 218 to 265 GHz bands with a resolution of about 1 arcsec in the higher frequency band. The operation of the array is fully automated. The current scientific programs for the array encompass high-resolution imaging of protoplanetary/protostellar disk structures, observations of molecular cloud complexes associated with spiral structure in nearby galaxies, and observations of molecular structures in the nuclei of spiral and luminous IRAS galaxies. 9 refs

  6. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    Energy Technology Data Exchange (ETDEWEB)

    Koohbor, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Soltanian, S., E-mail: s.soltanian@gmail.com [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada); Najafi, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Physics, Hamadan University of Technology, Hamadan (Iran, Islamic Republic of); Servati, P. [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada)

    2012-01-05

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co 1−x Zn x (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometry (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of the NWs can be significantly improved by an appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on

  7. Computer-aided design of microfluidic very large scale integration (mVLSI) biochips design automation, testing, and design-for-testability

    CERN Document Server

    Hu, Kai; Ho, Tsung-Yi

    2017-01-01

    This book provides a comprehensive overview of flow-based, microfluidic VLSI. The authors describe and solve in a comprehensive and holistic manner practical challenges such as control synthesis, wash optimization, design for testability, and diagnosis of modern flow-based microfluidic biochips. They introduce practical solutions, based on rigorous optimization and formal models. The technical contributions presented in this book will not only shorten the product development cycle, but also accelerate the adoption and further development of modern flow-based microfluidic biochips, by facilitating the full exploitation of design complexities that are possible with current fabrication techniques. Offers the first practical problem formulation for automated control-layer design in flow-based microfluidic biochips and provides a systematic approach for solving this problem; Introduces a wash-optimization method for cross-contamination removal; Presents a design-for-testability (DfT) technique that can achieve 100...

  8. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
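
    The shift-invariance idea described above can be sketched for a uniform linear array as follows. This is plain least-squares ESPRIT, not the TLS variant discussed in the paper, and the scenario (element count, source angles, SNR) is an assumption for illustration.

```python
# Least-squares ESPRIT sketch for a uniform linear array.
import numpy as np

def ls_esprit(snapshots, n_sources, d_over_lambda=0.5):
    """Estimate DOAs (degrees) from (M, N) ULA snapshots via LS-ESPRIT."""
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    _, eigvecs = np.linalg.eigh(r)
    es = eigvecs[:, -n_sources:]          # signal subspace (largest eigenvalues)
    es1, es2 = es[:-1, :], es[1:, :]      # two identical subarrays shifted by one element
    psi = np.linalg.pinv(es1) @ es2       # rotation operator relating the subarrays
    phases = np.angle(np.linalg.eigvals(psi))
    return np.rad2deg(np.arcsin(phases / (2 * np.pi * d_over_lambda)))

# two sources at -10 and 25 degrees observed by an 8-element half-wavelength ULA
rng = np.random.default_rng(2)
m, n = 8, 400
doas = np.deg2rad([-10.0, 25.0])
a_mat = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
x = a_mat @ rng.standard_normal((2, n)) + 0.05 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
print(np.sort(ls_esprit(x, 2)))           # ~[-10, 25]
```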

  9. Big Data Challenges for Large Radio Arrays

    Science.gov (United States)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  10. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary densified NFs cannot be precisely predicted by existing theories. This originates from the recently quantified orders of magnitude lower than expected NF array effective axial elastic modulus (E), and here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high value applications.

  11. Post-Irradiation Examination of Array Targets - Part I

    Energy Technology Data Exchange (ETDEWEB)

    Icenhour, A.S.

    2004-01-23

    During FY 2001, two arrays, each containing seven neptunium-loaded targets, were irradiated at the Advanced Test Reactor in Idaho to examine the influence of multi-target self-shielding on 236Pu content and to evaluate fission product release data. One array consisted of seven targets that contained 10 vol % NpO2 pellets, while the other array consisted of seven targets that contained 20 vol % NpO2 pellets. The arrays were located in the same irradiation facility but were axially separated to minimize the influence of one array on the other. Each target also contained a dosimeter package, which consisted of a small NpO2 wire that was inside a vanadium container. After completion of irradiation and shipment back to the Oak Ridge National Laboratory, nine of the targets (four from the 10 vol % array and five from the 20 vol % array) were punctured for pressure measurement and measurement of 85Kr. These nine targets and the associated dosimeters were then chemically processed to measure the residual neptunium, total plutonium production, 238Pu production, and 236Pu concentration at discharge. The amount and isotopic composition of fission products were also measured. This report provides the results of the processing and analysis of the nine targets.

  12. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D) and three-dimensional (3D) nanoarchitectures. Our synthesis technique combines an aqueous solution route with post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO nanorod arrays of 1D rods, 2D crosses, and 3D tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates, and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate and the reactor design are critical for the formation of nanorod arrays with thin diameters and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and micro-Raman spectroscopy have been used to characterize the samples.

  13. A Soft Computing Approach to Crack Detection and Impact Source Identification with Field-Programmable Gate Array Implementation

    Directory of Open Access Journals (Sweden)

    Arati M. Dixit

    2013-01-01

    Full Text Available The real-time nondestructive testing (NDT) for crack detection and impact source identification (CDISI) has attracted researchers from diverse areas. This is apparent from the current work in the literature. CDISI has usually been performed by visual assessment of waveforms generated by a standard data acquisition system. In this paper we suggest automating CDISI for metal armor plates using a soft computing approach, by developing a fuzzy inference system to deal effectively with this problem. It is also advantageous to develop a chip that can contribute towards real-time CDISI. The objective of this paper is to report on efforts to develop an automated CDISI procedure and to formulate a technique such that the proposed method can be easily implemented on a chip. The CDISI fuzzy inference system is developed using MATLAB's fuzzy logic toolbox. A VLSI circuit for CDISI is developed on the basis of the fuzzy logic model using Verilog, a hardware description language (HDL). The Xilinx ISE WebPACK 9.1i is used for design, synthesis, implementation, and verification. The CDISI field-programmable gate array (FPGA) implementation is done using Xilinx's Spartan 3 FPGA. SynaptiCAD's Verilog simulators—VeriLogger PRO and ModelSim—are used as the software simulation and debug environment.

  14. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive...

  15. Study and Design of Differential Microphone Arrays

    CERN Document Server

    Benesty, Jacob

    2013-01-01

    Microphone arrays have attracted a lot of interest over the last few decades since they have the potential to solve many important problems such as noise reduction/speech enhancement, source separation, dereverberation, spatial sound recording, and source localization/tracking, to name a few. However, the design and implementation of microphone arrays with beamforming algorithms is not a trivial task when it comes to processing broadband signals such as speech. Indeed, in most sensor arrangements, the beamformer tends to have a frequency-dependent response. One exception, perhaps, is the family of differential microphone arrays (DMAs) that have the promise to form frequency-independent responses. Moreover, they have the potential to attain high directional gains with small and compact apertures. As a result, this type of microphone arrays has drawn much research and development attention recently. This book is intended to provide a systematic study of DMAs from a signal processing perspective. The primary obj...

  16. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short-period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above mb 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)
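
    To illustrate the kind of detection step a station processor performs, the sketch below uses a generic short-term/long-term average (STA/LTA) trigger on a synthetic trace. This is only a stand-in: the SASP described above used a linear detector and Weichert's log detector, neither of which is reproduced here.

```python
# Generic STA/LTA trigger sketch (illustrative stand-in, not the SASP algorithms).
import numpy as np

def sta_lta_detections(trace, fs, sta_s=1.0, lta_s=30.0, threshold=3.0):
    """Return sample indices where the STA/LTA ratio of |trace| first exceeds threshold."""
    power = np.abs(trace)
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(power, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(power, np.ones(lta_n) / lta_n, mode="same") + 1e-12
    ratio = sta / lta
    above = ratio > threshold
    onsets = np.where(above[1:] & ~above[:-1])[0] + 1   # rising edges of the trigger
    return onsets

# synthetic example: noise with a short burst (a "P arrival") starting at t = 60 s
fs = 20.0
rng = np.random.default_rng(3)
trace = rng.standard_normal(int(120 * fs))
trace[int(60 * fs):int(62 * fs)] += 8.0 * rng.standard_normal(int(2 * fs))
print(sta_lta_detections(trace, fs) / fs)   # onset reported near 60 s
```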

  17. Deformable wire array: fiber drawn tunable metamaterials

    DEFF Research Database (Denmark)

    Fleming, Simon; Stefani, Alessio; Tang, Xiaoli

    2017-01-01

    By fiber drawing we fabricate a wire array metamaterial, the structure of which can be actively modified. The plasma frequency can be tuned by 50% by compressing the metamaterial; it recovers when released, and the process can be repeated.

  18. Copper-encapsulated vertically aligned carbon nanotube arrays.

    Science.gov (United States)

    Stano, Kelly L; Chapla, Rachel; Carroll, Murphy; Nowak, Joshua; McCord, Marian; Bradford, Philip D

    2013-11-13

    A new procedure is described for the fabrication of vertically aligned carbon nanotubes (VACNTs) that are decorated, and even completely encapsulated, by a dense network of copper nanoparticles. The process involves the conformal deposition of pyrolytic carbon (Py-C) to stabilize the aligned carbon-nanotube structure during processing. The stabilized arrays are mildly functionalized using oxygen plasma treatment to improve wettability, and they are then infiltrated with an aqueous, supersaturated Cu salt solution. Once dried, the salt forms a stabilizing crystal network throughout the array. After calcination and H2 reduction, Cu nanoparticles are left decorating the CNT surfaces. Studies were carried out to determine the optimal processing parameters to maximize Cu content in the composite. These included the duration of Py-C deposition and system process pressure as well as the implementation of subsequent and multiple Cu salt solution infiltrations. The optimized procedure yielded a nanoscale hybrid material where the anisotropic alignment from the VACNT array was preserved, and the mass of the stabilized arrays was increased by over 24-fold because of the addition of Cu. The procedure has been adapted for other Cu salts and can also be used for other metal salts altogether, including Ni, Co, Fe, and Ag. The resulting composite is ideally suited for application in thermal management devices because of its low density, mechanical integrity, and potentially high thermal conductivity. Additionally, further processing of the material via pressing and sintering can yield consolidated, dense bulk composites.

  19. Micromirror array nanostructures for anticounterfeiting applications

    Science.gov (United States)

    Lee, Robert A.

    2004-06-01

    The optical characteristics of pixellated passive micro mirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for the protection of documents from counterfeiting. The traditional design variables of foil based diffractive OVDs are shown to be able to be mapped to a corresponding set of design parameters for reflective optical micro mirror array (OMMA) devices. The greatly increased depth characteristics of micro mirror array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in-line with the normal printing process. The micro mirror array OVD architecture therefore eliminates the need for hot stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micro mirror array devices via a palette based data format and a combination of electron beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices which have the interesting property of allowing for both application and customer specific OVD image encoding and data encoding at the end user stage of production is described. Because of the end user nature of the image and data encoding process, these devices are particularly well suited to ID document applications, and for this reason we refer to this new OVD concept as biometric OVD technology.

  20. Si Wire-Array Solar Cells

    Science.gov (United States)

    Boettcher, Shannon

    2010-03-01

    Micron-scale Si wire arrays are three-dimensional photovoltaic absorbers that enable orthogonalization of light absorption and carrier collection and hence allow for the utilization of relatively impure Si in efficient solar cell designs. The wire arrays are grown by a vapor-liquid-solid-catalyzed process on a crystalline (111) Si wafer lithographically patterned with an array of metal catalyst particles. Following growth, such arrays can be embedded in polydimethylsiloxane (PDMS) and then peeled from the template growth substrate. The result is an unusual photovoltaic material: a flexible, bendable, wafer-thickness crystalline Si absorber. In this paper I will describe: 1. the growth of high-quality Si wires with controllable doping and the evaluation of their photovoltaic energy-conversion performance using a test electrolyte that forms a rectifying conformal semiconductor-liquid contact; 2. the observation of enhanced absorption in wire arrays exceeding the conventional light trapping limits for planar Si cells of equivalent material thickness; and 3. single-wire and large-area solid-state Si wire-array solar cell results obtained to date with directions for future cell designs based on optical and device physics. In collaboration with Michael Kelzenberg, Morgan Putnam, Joshua Spurgeon, Daniel Turner-Evans, Emily Warren, Nathan Lewis, and Harry Atwater, California Institute of Technology.

  1. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults occur in the gas path components of gas turbines, sparsely distributed, charged debris is generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thereby indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a larger piece of debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately with the electrostatic detection method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensor circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector, which is then reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
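
    A generic sketch of the l1-regularized recovery step described above, using iterative soft-thresholding (ISTA) on an underdetermined linear model. The random Gaussian sensing matrix stands in for the discretized HSESCA measurement model, and the SVD pre-processing and weighted-centroid calibration of the paper are not reproduced; all numbers are illustrative assumptions.

```python
# Generic ISTA sketch for sparse recovery from underdetermined measurements.
import numpy as np

def ista(a, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||a @ x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(a, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(a.shape[1])
    for _ in range(n_iter):
        grad = a.T @ (a @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
    return x

# sparse "debris charge" vector with 3 nonzeros, observed through 20 measurements of 100 unknowns
rng = np.random.default_rng(4)
m, n_unknowns = 20, 100
a = rng.standard_normal((m, n_unknowns)) / np.sqrt(m)
x_true = np.zeros(n_unknowns)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]
y = a @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(a, y)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # expected: indices near [5, 37, 80]
```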

  2. Microfabricated hollow microneedle array using ICP etcher

    Science.gov (United States)

    Ji, Jing; Tay, Francis E. H.; Miao, Jianmin

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  3. Microfabricated hollow microneedle array using ICP etcher

    International Nuclear Information System (INIS)

    Ji Jing; Tay, Francis E H; Miao Jianmin

    2006-01-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF 6 /O 2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases

  4. Microfabricated hollow microneedle array using ICP etcher

    Energy Technology Data Exchange (ETDEWEB)

    Ji Jing [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, Francis E H [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore)

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  5. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
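
    One stage of such a pipeline can be illustrated with a gamma-correction lookup table, of the kind typically precomputed and stored in FPGA block RAM and then applied per pixel. The gamma value and bit depth below are assumptions for illustration, not parameters taken from the paper.

```python
# Sketch of a gamma-correction stage implemented as a precomputed 8-bit LUT.
import numpy as np

def gamma_lut(gamma=2.2, bits=8):
    """Precompute an integer LUT mapping linear sensor codes to gamma-encoded codes."""
    max_code = (1 << bits) - 1
    codes = np.arange(max_code + 1)
    return np.round(max_code * (codes / max_code) ** (1.0 / gamma)).astype(np.uint8)

def apply_gamma(image, lut):
    """Apply the LUT to an 8-bit image (the per-pixel work a pipeline stage would do)."""
    return lut[image]

lut = gamma_lut()
test_image = np.array([[0, 32, 64], [128, 192, 255]], dtype=np.uint8)
print(apply_gamma(test_image, lut))
```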

  6. Configuration Considerations for Low Frequency Arrays

    Science.gov (United States)

    Lonsdale, C. J.

    2005-12-01

    The advance of digital signal processing capabilities has spurred a new effort to exploit the lowest radio frequencies observable from the ground, from ˜10 MHz to a few hundred MHz. Multiple scientifically and technically complementary instruments are planned, including the Mileura Widefield Array (MWA) in the 80-300 MHz range, and the Long Wavelength Array (LWA) in the 20-80 MHz range. The latter instrument will target relatively high angular resolution, and baselines up to a few hundred km. An important practical question for the design of such an array is how to distribute the collecting area on the ground. The answer to this question profoundly affects both cost and performance. In this contribution, the factors which determine the anticipated performance of any such array are examined, paying particular attention to the viability and accuracy of array calibration. It is argued that due to the severity of ionospheric effects in particular, it will be difficult or impossible to achieve routine, high dynamic range imaging with a geographically large low frequency array, unless a large number of physically separate array stations is built. This conclusion is general, is based on the need for adequate sampling of ionospheric irregularities, and is independent of the calibration algorithms and techniques that might be employed. It is further argued that array configuration figures of merit that are traditionally used for higher frequency arrays are inappropriate, and a different set of criteria are proposed.

  7. Improved chemical identification from sensor arrays using intelligent algorithms

    Science.gov (United States)

    Roppel, Thaddeus A.; Wilson, Denise M.

    2001-02-01

    Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
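
    A minimal sketch of the "best-match" fill-in described above: a faulted sensor's reading is replaced with the value from the closest reference pattern, where closeness is judged using only the healthy sensors. The reference library, fault mask and readings below are illustrative assumptions, not data from the study.

```python
# Best-match imputation of faulted sensor readings before classification (illustration).
import numpy as np

def best_match_fill(sample, healthy_mask, library):
    """Replace faulted entries of `sample` using the nearest library pattern.

    sample       : (S,) readings from S sensors (faulted entries ignored)
    healthy_mask : (S,) boolean, True where the sensor is known to be good
    library      : (K, S) previously recorded fault-free response patterns
    """
    distances = np.linalg.norm(library[:, healthy_mask] - sample[healthy_mask], axis=1)
    best = library[np.argmin(distances)]
    filled = sample.copy()
    filled[~healthy_mask] = best[~healthy_mask]
    return filled

# 5-sensor array, library of 3 known analyte patterns, sensor 2 flagged as stuck
library = np.array([[0.9, 0.2, 0.4, 0.1, 0.7],
                    [0.1, 0.8, 0.6, 0.9, 0.2],
                    [0.5, 0.5, 0.9, 0.4, 0.4]])
sample = np.array([0.88, 0.22, 0.0, 0.12, 0.69])   # sensor 2 stuck low
healthy = np.array([True, True, False, True, True])
print(best_match_fill(sample, healthy, library))    # sensor 2 filled from pattern 0 (~0.4)
```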

  8. Next-Generation Microshutter Arrays for Large-Format Imaging and Spectroscopy

    Science.gov (United States)

    Moseley, Samuel; Kutyrev, Alexander; Brown, Ari; Li, Mary

    2012-01-01

    A next-generation microshutter array, LArge Microshutter Array (LAMA), was developed as a multi-object field selector. LAMA consists of small-scaled microshutter arrays that can be combined to form large-scale microshutter array mosaics. Microshutter actuation is accomplished via electrostatic attraction between the shutter and a counter electrode, and 2D addressing can be accomplished by applying an electrostatic potential between a row of shutters and a column, orthogonal to the row, of counter electrodes. Microelectromechanical system (MEMS) technology is used to fabricate the microshutter arrays. The main feature of the microshutter device is to use a set of standard surface micromachining processes for device fabrication. Electrostatic actuation is used to eliminate the need for macromechanical magnet actuating components. A simplified electrostatic actuation with no macro components (e.g. moving magnets) required for actuation and latching of the shutters will make the microshutter arrays robust and less prone to mechanical failure. Smaller-size individual arrays will help to increase the yield and thus reduce the cost and improve robustness of the fabrication process. Reducing the size of the individual shutter array to about one square inch and building the large-scale mosaics by tiling these smaller-size arrays would further help to reduce the cost of the device due to the higher yield of smaller devices. The LAMA development is based on prior experience acquired while developing microshutter arrays for the James Webb Space Telescope (JWST), but it will have different features. The LAMA modular design permits large-format mosaicking to cover a field of view at least 50 times larger than JWST MSA. The LAMA electrostatic, instead of magnetic, actuation enables operation cycles at least 100 times faster and a mass significantly smaller compared to JWST MSA. Also, standard surface micromachining technology will simplify the fabrication process, increasing

  9. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...

  10. Automated installation methods for photovoltaic arrays

    Science.gov (United States)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reducing these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary hardware designs for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  11. Phased Array Ultrasonic Inspection of Titanium Forgings

    International Nuclear Information System (INIS)

    Howard, P.; Klaassen, R.; Kurkcu, N.; Barshinger, J.; Chalek, C.; Nieters, E.; Sun, Zongqi; Fromont, F. de

    2007-01-01

    Aerospace forging inspections typically use multiple, subsurface-focused sound beams in combination with digital C-scan image acquisition and display. Traditionally, forging inspections have been implemented using multiple single element, fixed focused transducers. Recent advances in phased array technology have made it possible to perform an equivalent inspection using a single phased array transducer. General Electric has developed a system to perform titanium forging inspection based on medical phased array technology and advanced image processing techniques. The components of that system and system performance for titanium inspection will be discussed

  12. Timed arrays wideband and time varying antenna arrays

    CERN Document Server

    Haupt, Randy L

    2015-01-01

    Introduces timed arrays and design approaches to meet the new high performance standards The author concentrates on any aspect of an antenna array that must be viewed from a time perspective. The first chapters briefly introduce antenna arrays and explain the difference between phased and timed arrays. Since timed arrays are designed for realistic time-varying signals and scenarios, the book also reviews wideband signals, baseband and passband RF signals, polarization and signal bandwidth. Other topics covered include time domain, mutual coupling, wideband elements, and dispersion. The auth

  13. Low-Noise CMOS Circuits for On-Chip Signal Processing in Focal-Plane Arrays

    Science.gov (United States)

    Pain, Bedabrata

    The performance of focal-plane arrays can be significantly enhanced through the use of on-chip signal processing. Novel, in-pixel, on-focal-plane, analog signal-processing circuits for high-performance imaging are presented in this thesis. The presence of high background radiation is a major impediment to infrared focal-plane array design. An in-pixel, background-suppression scheme, using a dynamic analog current memory circuit, is described. The scheme also suppresses spatial noise that results from response non-uniformities of photo-detectors, leading to background-limited infrared detector readout performance. Two new, low-power, compact, current memory circuits, optimized for operation at the ultra-low current levels required in infrared detection, are presented. The first one is a self-cascading current memory that increases the output impedance, and the second one is a novel, switch feed-through reducing current memory, implemented using error-current feedback. This circuit can operate with a residual absolute error of less than 0.1%. The storage-time of the memory is long enough to also find applications in neural network circuits. In addition, a voltage-mode, accurate, low-offset, low-power, high-uniformity, random-access sample-and-hold cell, implemented using a CCD with feedback, is also presented for use in background-suppression and neural network applications. A new, low-noise, ultra-low-level signal readout technique, implemented by individually counting photo-electrons within the detection pixel, is presented. The output of each unit-cell is a digital word corresponding to the intensity of the photon flux, and the readout is noise free. This technique requires the use of unit-cell amplifiers that feature ultra-high gain, low power, self-biasing capability and sub-electron noise levels. Both single-input and differential-input implementations of such amplifiers are investigated. A noise analysis technique is presented for analyzing sampled

  14. Lithography-free centimeter-long nanochannel fabrication method using an electrospun nanofiber array

    International Nuclear Information System (INIS)

    Park, Suk Hee; Shin, Hyun-Jun; Lee, Sangyoup; Kim, Yong-Hwan; Yang, Dong-Yol; Lee, Jong-Chul

    2012-01-01

    Novel cost-effective methods for polymeric and metallic nanochannel fabrication have been demonstrated using an electrospun nanofiber array. Like other electrospun nanofiber-based nanofabrication methods, our system also showed high throughput as well as cost-effective performance. Unlike other systems, however, our fabrication scheme provides a pseudo-parallel nanofiber array a few centimeters long at a speed of several tens of fibers per second based on our unique inclined-gap fiber collecting system. Pseudo-parallel nanofiber arrays were used either directly for the PDMS molding process or for the metal lift-off process followed by the SiO2 deposition process to produce the nanochannel array. While the PDMS molding process was a simple fabrication based on one-step casting, the metal lift-off process followed by SiO2 deposition allowed fine-tuning of the height and width of nanogrooves down to sub-hundred nanometers from a few micrometers. Nanogrooves were covered either with a cover glass or with a PDMS slab, and nanochannel connectivity was investigated with a fluorescent dye. Also, nanochannel arrays were used to investigate mobility and conformations of λ-DNA. (paper)

  15. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
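
    As a concrete illustration of the quantity being fitted, Rent's rule relates the number of external connections T of a block of N processing elements by T = t N^p. A minimal sketch of estimating the Rent exponent p by a log-log fit is shown below; the partition statistics are synthetic placeholders, not data from the study.

        import numpy as np

        def rent_exponent(nodes: np.ndarray, terminals: np.ndarray) -> tuple:
            """Fit T = t * N**p in log-log space and return (t, p)."""
            p, log_t = np.polyfit(np.log(nodes), np.log(terminals), 1)
            return float(np.exp(log_t)), float(p)

        # Synthetic partition statistics: N = block size, T = connections crossing the block boundary.
        N = np.array([8, 16, 32, 64, 128, 256])
        T = np.array([10, 17, 28, 47, 79, 132])   # roughly T ~ 4 * N**0.75
        t, p = rent_exponent(N, T)
        print(f"Rent coefficient t = {t:.2f}, Rent exponent p = {p:.2f}")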

  16. DNA electrophoresis through microlithographic arrays

    International Nuclear Information System (INIS)

    Sevick, E.M.; Williams, D.R.M.

    1996-01-01

    Electrophoresis is one of the most widely used techniques in biochemistry and genetics for size-separating charged molecular chains such as DNA or synthetic polyelectrolytes. The separation is achieved by driving the chains through a gel with an external electric field. As a result of the field and the obstacles that the medium provides, the chains have different mobilities and are physically separated after a given process time. The macroscopically observed mobility scales inversely with chain size: small molecules move through the medium quickly while larger molecules move more slowly. However, electrophoresis remains a tool that has yet to be optimised for most efficient size separation of polyelectrolytes, particularly large polyelectrolytes, e.g. DNA in excess of 30-50 kbp. Microlithographic arrays etched with an ordered pattern of obstacles provide an attractive alternative to gel media and provide wider avenues for size separation of polyelectrolytes and promote a better understanding of the separation process. Its advantages over gels are (1) the ordered array is durable and can be re-used, (2) the array morphology is ordered and can be standardized for specific separation, and (3) calibration with a marker polyelectrolyte is not required as the array is reproduced to high precision. Most importantly, the array geometry can be graduated along the chip so as to expand the size-dependent regime over larger chain lengths and postpone saturation. In order to predict the effect of obstacles upon the chain-length dependence in mobility and hence, size separation, we study the dynamics of single chains using theory and simulation. We present recent work describing: 1) the release kinetics of a single DNA molecule hooked around a point, frictionless obstacle and in both weak and strong field limits, 2) the mobility of a chain impinging upon point obstacles in an ordered array of obstacles, demonstrating the wide range of interactions possible between the chain and

  17. Integration of Antibody Array Technology into Drug Discovery and Development.

    Science.gov (United States)

    Huang, Wei; Whittaker, Kelly; Zhang, Huihua; Wu, Jian; Zhu, Si-Wei; Huang, Ruo-Pan

    Antibody arrays represent a high-throughput technique that enables the parallel detection of multiple proteins with minimal sample volume requirements. In recent years, antibody arrays have been widely used to identify new biomarkers for disease diagnosis or prognosis. Moreover, many academic research laboratories and commercial biotechnology companies are starting to apply antibody arrays in the field of drug discovery. In this review, some technical aspects of antibody array development and the various platforms currently available will be addressed; however, the main focus will be on the discussion of antibody array technologies and their applications in drug discovery. Aspects of the drug discovery process, including target identification, mechanisms of drug resistance, molecular mechanisms of drug action, drug side effects, and the application in clinical trials and in managing patient care, which have been investigated using antibody arrays in recent literature, will be examined, and the relevance of this technology in advancing this process will be discussed. Protein profiling with antibody array technology, in addition to other applications, has emerged as a successful, novel approach for drug discovery because of the well-known importance of proteins in cell events and disease development.

  18. Backshort-Under-Grid arrays for infrared astronomy

    Science.gov (United States)

    Allen, C. A.; Benford, D. J.; Chervenak, J. A.; Chuss, D. T.; Miller, T. M.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.

    2006-04-01

    We are developing a kilopixel, filled bolometer array for space infrared astronomy. The array consists of three individual components, to be merged into a single, working unit: (1) a transition edge sensor bolometer array, operating in the millikelvin regime, (2) a quarter-wave backshort grid, and (3) a superconducting quantum interference device multiplexer readout. The detector array is designed as a filled, square grid of suspended, silicon bolometers with superconducting sensors. The backshort arrays are fabricated separately and will be positioned in the cavities created behind each detector during fabrication. The grids have a unique interlocking feature machined into the walls for positioning and mechanical stability. The spacing of the backshort beneath the detector grid can be set from ~30 to 300 μm by independently adjusting two process parameters during fabrication. The ultimate goal is to develop a large-format array architecture with background-limited sensitivity, suitable for a wide range of wavelengths and applications, to be directly bump-bonded to a multiplexer circuit. We have produced prototype two-dimensional arrays having 8×8 detector elements. We present detector design, fabrication overview, and assembly technologies.

  19. Detailed Diagnostics of the BIOMASS Feed Array Prototype

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Pontoppidan, K.

    2013-01-01

    of the array had a significant influence on the measured feed pattern. The 3D reconstruction and further post-processing is therefore applied both to the feed array measured data, and a set of simulated data generated by the GRASP software which replicate the series of measurements. The results...

  20. Silver Nanowire Arrays : Fabrication and Applications

    OpenAIRE

    Feng, Yuyi

    2016-01-01

    Nanowire arrays have increasingly received attention for their use in a variety of applications such as surface-enhanced Raman scattering (SERS), plasmonic sensing, and electrodes for photoelectric devices. However, until now, large-scale fabrication of device-suitable metallic nanowire arrays on supporting substrates has seen very limited success. This thesis describes my work first on the development of a novel successful processing route for the fabrication of uniform noble metallic (e.g. A...

  1. Compensated readout for high-density MOS-gated memristor crossbar array

    KAUST Repository

    Zidan, Mohammed A.

    2015-01-01

    Leakage current is one of the main challenges facing high-density MOS-gated memristor arrays. In this study, we show that leakage current ruins the memory readout process for high-density arrays, and analyze the tradeoff between the array density and its power consumption. We propose a novel readout technique and its underlying circuitry, which is able to compensate for the transistor leakage-current effect in the high-density gated memristor array.
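
    A toy numerical model, not the authors' circuit, of the problem and the compensation idea: the measured column current is the selected cell's current plus the aggregate off-state leakage of the unselected gated cells, and subtracting an estimate of that leakage restores the read margin as the array grows. All device values below are illustrative.

        def column_current(r_sel, v_read, n_rows, i_leak):
            """Selected-cell current plus aggregate leakage of the (n_rows - 1) unselected gated cells."""
            return v_read / r_sel + (n_rows - 1) * i_leak

        def compensated_read(i_meas, n_rows, i_leak_est, v_read, r_threshold):
            """Subtract estimated leakage, convert back to resistance, and threshold LRS vs HRS."""
            i_cell = i_meas - (n_rows - 1) * i_leak_est
            return "LRS" if v_read / i_cell < r_threshold else "HRS"

        v_read, i_leak = 0.3, 5e-9          # illustrative read voltage and per-cell off-current
        for n_rows in (64, 1024):
            i_hrs = column_current(1e6, v_read, n_rows, i_leak)   # high-resistance (logic 0) cell
            i_lrs = column_current(1e4, v_read, n_rows, i_leak)   # low-resistance (logic 1) cell
            print(n_rows, compensated_read(i_hrs, n_rows, i_leak, v_read, 1e5),
                  compensated_read(i_lrs, n_rows, i_leak, v_read, 1e5))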

  2. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. This motivates the use of a multi-modal tactile sensory system, which combines static and dynamic force sensor arrays with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to require bulky interfaces. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach.

  3. Suppression of 3D coherent noise by areal geophone array; Menteki jushinki array ni yoru sanjigen coherent noise no yokusei

    Energy Technology Data Exchange (ETDEWEB)

    Murayama, R; Nakagami, K; Tanaka, H [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1996-05-01

    To improve the quality of data collected by reflection seismic exploration, a lattice was deployed at one point of a traverse line, and the data therefrom were used to study the 3D coherent noise suppression effect of the areal array. The test was conducted at a Japan National Oil Corporation test field in Kashiwazaki City, Niigata Prefecture. The deployed lattice had 144 vibration receiving points arrayed at intervals of 8 m, composing an areal array, and 187 vibration generating points arrayed at intervals of 20 m, extending over 6.5 km. Data were collected at the vibration receiving points in the lattice, each point acting independently of the others, and processed to compose a large areal array by summing the data from multiple receiving points. Analysis of the records collected at the receiving points in the lattice shows that an enlarged areal array leads to a higher S/N ratio and that different reflection waves are emphasized when the array direction is changed. 1 ref., 6 figs.

  4. Optimised 'on demand' protein arraying from DNA by cell free expression with the 'DNA to Protein Array' (DAPA) technology.

    Science.gov (United States)

    Schmidt, Ronny; Cook, Elizabeth A; Kastelic, Damjana; Taussig, Michael J; Stoevesandt, Oda

    2013-08-02

    We have previously described a protein arraying process based on cell free expression from DNA template arrays (DNA Array to Protein Array, DAPA). Here, we have investigated the influence of different array support coatings (Ni-NTA, Epoxy, 3D-Epoxy and Polyethylene glycol methacrylate (PEGMA)). Their optimal combination yields an increased amount of detected protein and an optimised spot morphology on the resulting protein array compared to the previously published protocol. The specificity of protein capture was improved using a tag-specific capture antibody on a protein repellent surface coating. The conditions for protein expression were optimised to yield the maximum amount of protein or the best detection results using specific monoclonal antibodies or a scaffold binder against the expressed targets. The optimised DAPA system was able to increase by threefold the expression of a representative model protein while conserving recognition by a specific antibody. The amount of expressed protein in DAPA was comparable to those of classically spotted protein arrays. Reaction conditions can be tailored to suit the application of interest. DAPA represents a cost effective, easy and convenient way of producing protein arrays on demand. The reported work is expected to facilitate the application of DAPA for personalized medicine and screening purposes. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Hydrogen Detection With a Gas Sensor ArrayProcessing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    Full Text Available An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol, and explosive gases has been constructed and tested. The sensors in the array operate in a dynamic mode, with temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. Measurements are performed at various hydrogen (17-3000 ppm), methane (167-3000 ppm) and propane (167-3000 ppm) concentrations at relative humidity levels of 0-75%RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve reliable hydrogen detection despite interference from humidity and residual gases.
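
    A minimal sketch of the described feature extraction: the FFT of each sensor's temperature-modulated response is taken, and the magnitudes of the DC component and the first five harmonics form the feature vector passed to the neural network. The trace length and synthetic signals below are illustrative.

        import numpy as np

        def harmonic_features(responses: np.ndarray, n_harmonics: int = 5) -> np.ndarray:
            """responses: (n_sensors, n_samples) resistance traces over one modulation cycle.
            Returns a flat vector of |DC| and |1st..n_harmonics| magnitudes per sensor."""
            spectra = np.abs(np.fft.rfft(responses, axis=1))
            return spectra[:, : n_harmonics + 1].ravel()

        # Four sensors, one synthetic modulation period of 256 samples each.
        t = np.linspace(0, 1, 256, endpoint=False)
        responses = np.vstack([1.0 + 0.3 * np.sin(2 * np.pi * (k + 1) * t) for k in range(4)])
        features = harmonic_features(responses)   # 4 sensors x 6 values = 24-element vector
        print(features.shape)                     # this vector is what the neural network would receive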

  6. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.

    Science.gov (United States)

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-12-07

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
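
    A software sketch, not the VLSI datapath itself, of the peak-based feature extraction described: the spike waveform is split at its peak sample and the area of each portion serves as a feature. The synthetic spike shape below is an illustrative assumption.

        import numpy as np

        def peak_split_areas(spike: np.ndarray) -> tuple:
            """Split a detected spike at its peak sample and return the area of each portion."""
            k = int(np.argmax(np.abs(spike)))                  # peak position
            area_pre = float(np.sum(np.abs(spike[: k + 1])))   # area up to and including the peak
            area_post = float(np.sum(np.abs(spike[k + 1 :])))  # area after the peak
            return area_pre, area_post

        # Synthetic spike: sharp rise followed by a slower after-hyperpolarization.
        t = np.arange(64)
        spike = np.exp(-((t - 20) ** 2) / 20.0) - 0.4 * np.exp(-((t - 30) ** 2) / 120.0)
        print(peak_split_areas(spike))   # the two areas form the feature vector used for sorting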

  7. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of code based cryptography (Cryptocoding technique for multi-layer key distribution scheme is presented. VLSI chip is designed for storing information on generation of round keys. New algorithm is developed for reduced key size with optimal performance. Error Control Algorithm is employed for both generation of round keys and diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. Probability of retrieving original key from any other round keys is reduced by diffusing nonlinear selective bit inversions on round keys. Randomized selective bit inversions are done on equal length of key bits by Round Constant Feedback Shift Register within the error correction limits of chosen code. Complexity of retrieving the original key from any other round keys is increased by optimal hardware usage. Proposed design is simulated and synthesized using VHDL coding for Spartan3E FPGA and results are shown. Comparative analysis is done between 128 bit Advanced Encryption Standard round keys and proposed round keys for showing security strength of proposed algorithm. This paper concludes that chip based multi-layer key distribution of proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.
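
    A heavily simplified sketch of the general idea of diffusing round keys by selective bit inversion driven by a feedback shift register: a toy 16-bit Fibonacci LFSR stands in for the round-constant feedback shift register, and the number of inverted bits is capped to represent the error-correction limit of the chosen code. The taps, key width, and cap are illustrative assumptions, not the paper's algorithm; applying the same mask again reverses the inversion.

        def lfsr_mask(seed: int, width: int, max_flips: int) -> int:
            """Generate a pseudorandom bit mask with at most `max_flips` set bits using a toy
            16-bit Fibonacci LFSR (taps 16, 14, 13, 11), standing in for the round-constant FSR."""
            state, mask, flips, pos = seed & 0xFFFF, 0, 0, 0
            while pos < width and flips < max_flips:
                bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                if state & 1:          # decide whether to invert this key bit
                    mask |= 1 << pos
                    flips += 1
                pos += 1
            return mask

        def invert_selected_bits(round_key: int, seed: int, width: int = 128, max_flips: int = 8) -> int:
            """Selective bit inversion of a round key within a bounded number of flips."""
            return round_key ^ lfsr_mask(seed, width, max_flips)

        k = 0x0123456789ABCDEF0123456789ABCDEF
        print(hex(invert_selected_bits(k, seed=0xACE1)))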

  8. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). Among the data with which the system works, we can count 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which

  9. Simulating the Sky as Seen by the Square Kilometer Array using the MIT Array Performance Simulator (MAPS)

    Science.gov (United States)

    Matthews, Lynn D.; Cappallo, R. J.; Doeleman, S. S.; Fish, V. L.; Lonsdale, C. J.; Oberoi, D.; Wayth, R. B.

    2009-05-01

    The Square Kilometer Array (SKA) is a proposed next-generation radio telescope that will operate at frequencies of 0.1-30 GHz and be 50-100 times more sensitive than existing radio arrays. Meeting the performance goals of this instrument will require innovative new hardware and software developments, a variety of which are now under consideration. Key to evaluating the performance characteristics of proposed SKA designs and testing the feasibility of new data calibration and processing algorithms is the ability to carry out realistic simulations of radio wavelength arrays under a variety of observing conditions. The MIT Array Performance Simulator (MAPS) (http://www.haystack.mit.edu/ast/arrays/maps/index.html) is an observations simulation package designed to achieve this goal. MAPS accepts an input source list or sky model and generates a model visibility set for a user-defined "virtual observatory'', incorporating such factors as array geometry, primary beam shape, field-of-view, and time and frequency resolution. Optionally, effects such as thermal noise, out-of-beam sources, variable station beams, and time/location-dependent ionospheric effects can be included. We will showcase current capabilities of MAPS for SKA applications by presenting results from an analysis of the effects of realistic sky backgrounds on the achievable image fidelity and dynamic range of SKA-like arrays comprising large numbers of small-diameter antennas.
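
    A minimal sketch of the core operation such a simulator performs: generating model visibilities for a list of point sources over a set of baselines, V(u,v) = Σ_k S_k exp(−2πi(u l_k + v m_k)), with optional thermal noise. The array layout, source list, and noise level below are illustrative; this is not MAPS itself.

        import numpy as np

        def model_visibilities(uv, sources, noise_sigma=0.0, rng=None):
            """uv: (n_baselines, 2) baseline coordinates in wavelengths.
            sources: list of (flux_jy, l, m) with (l, m) direction cosines.
            Returns complex visibilities, optionally with additive thermal noise."""
            rng = rng or np.random.default_rng(0)
            vis = np.zeros(len(uv), dtype=complex)
            for flux, l, m in sources:
                vis += flux * np.exp(-2j * np.pi * (uv[:, 0] * l + uv[:, 1] * m))
            if noise_sigma > 0:
                vis += rng.normal(0, noise_sigma, vis.shape) + 1j * rng.normal(0, noise_sigma, vis.shape)
            return vis

        uv = np.random.default_rng(1).uniform(-500, 500, size=(1000, 2))   # toy snapshot uv coverage
        sky = [(1.0, 0.001, 0.0), (0.3, -0.002, 0.0015)]                   # two point sources
        print(model_visibilities(uv, sky, noise_sigma=0.01)[:3])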

  10. Hardware Algorithm Implementation for Mission Specific Processing

    Science.gov (United States)

    2008-03-01

    knowledge about the VLSI technology and understands VHDL, scripting, and integrating the script in the Cadence software program or ModelSim. The main...possible to have a trade-off between parallel and serial logic design for the circuit. Power can be saved by using parallelization, pipelining, or a

  11. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    Science.gov (United States)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980s, neurobiologists suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object. The receptive field of visual neurons has real-time transforms in response to motion, to maintain a stable representation. When the visual stimulus is changed due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field. This dual transform in the receptive fields compensates for geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine parameter sensing circuits will function as a smart sensor tightly coupled to the passive imaging sensor (retina). The neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception in the primate primary visual cortex. We have developed the computer simulation and experimented on realistic and synthetic image data, and performed preliminary research on using analog VLSI technology for implementation of the neural geometric engine. We have benchmarked the engine on DMA terrain data against their results and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on an analog VLSI chip, we will be able to accurately reconstruct a 3D terrain surface in real time from stereoscopic imagery.

  12. Fabrication of large NbSi bolometer arrays for CMB applications

    International Nuclear Information System (INIS)

    Ukibe, M.; Belier, B.; Camus, Ph.; Dobrea, C.; Dumoulin, L.; Fernandez, B.; Fournier, T.; Guillaudin, O.; Marnieros, S.; Yates, S.J.C.

    2006-01-01

    Future cosmic microwave background experiments for high-resolution anisotropy mapping and polarisation detection require large arrays of bolometers at low temperature. We have developed a process to build arrays of antenna-coupled bolometers for that purpose. With adjustment of the NbxSi1-x alloy composition, the array can be made of high impedance or superconductive (TES) sensors.

  13. Batch fabrication of disposable screen printed SERS arrays.

    Science.gov (United States)

    Qu, Lu-Lu; Li, Da-Wei; Xue, Jin-Qun; Zhai, Wen-Lei; Fossey, John S; Long, Yi-Tao

    2012-03-07

    A novel facile method of fabricating disposable and highly reproducible surface-enhanced Raman spectroscopy (SERS) arrays using screen printing was explored. The screen printing ink containing silver nanoparticles was prepared and printed on supporting materials by a screen printing process to fabricate SERS arrays (6 × 10 printed spots) in large batches. The fabrication conditions, SERS performance and application of these arrays were systematically investigated, and a detection limit of 1.6 × 10^-13 M for rhodamine 6G could be achieved. Moreover, the screen printed SERS arrays exhibited high reproducibility and stability: the spot-to-spot SERS signals showed that the intensity variation was less than 10%, and SERS performance could be maintained over 12 weeks. Portable high-throughput analysis of biological samples was accomplished using these disposable screen printed SERS arrays.

  14. Motion camera based on a custom vision sensor and an FPGA architecture

    Science.gov (United States)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated to a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC computer which is used for high level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event- address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
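
    A minimal sketch of the time-of-travel computation that the FPGA stage performs on the address-event stream: the local speed of a moving edge is the pixel pitch divided by the time difference between events at adjacent pixels. The event format and pixel pitch below are illustrative assumptions.

        def time_of_travel_speed(t_first_s: float, t_second_s: float, pixel_pitch_um: float = 20.0) -> float:
            """Speed (um/s) of a moving edge crossing two adjacent pixels, from event timestamps."""
            dt = t_second_s - t_first_s
            if dt <= 0:
                raise ValueError("second event must be later than the first")
            return pixel_pitch_um / dt

        # Edge detected at pixel (12, 40) at t = 0.1000 s and at neighbouring pixel (13, 40) at t = 0.1025 s.
        print(time_of_travel_speed(0.1000, 0.1025))   # 8000 um/s along the row direction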

  15. Multi-Beam Radio Frequency (RF) Aperture Arrays Using Multiplierless Approximate Fast Fourier Transform (FFT)

    Science.gov (United States)

    2017-08-01

    Keywords: Fourier transform, discrete Fourier transform, digital array processing, antenna beamformers. [The remainder of this record is report front matter; recoverable headings include "Simulation of 2-D Beams Cross Sections" and Figure 1, "N-beam Array Processing System using a Linear Array".]

  16. Microneedles array with biodegradable tips for transdermal drug delivery

    Science.gov (United States)

    Iliescu, Ciprian; Chen, Bangtao; Wei, Jiashen; Tay, Francis E. H.

    2008-12-01

    This paper presents an enhancement solution for transdermal drug delivery using a microneedle array with biodegradable tips. The microneedle array was fabricated using deep reactive ion etching (DRIE), and the biodegradable tips were made porous by an electrochemical etching process. The porous silicon microneedle tips can greatly enhance transdermal drug delivery in a minimally invasive, painless, and convenient manner; at the same time, they are breakable and biodegradable. The main problem with silicon microneedles is tip breakage during insertion. The solution proposed is to fabricate the microneedle tip from a biodegradable material - porous silicon. The silicon microneedles are fabricated using the DRIE notching effect of charges reflected from the mask. The process overcomes the difficulty in the undercut control of the tips during the classical isotropic silicon etching process. When the silicon tips were formed, the porous tips were then generated using a classical electrochemical anodization process in MeCN/HF/H2O solution. The paper presents the experimental results of in vitro release of calcein and BSA through animal skins using a microneedle array with biodegradable tips. Compared to transdermal drug delivery without any enhancer, the microneedle array showed significant enhancement of drug release.

  17. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  18. Self-assembly and optical properties of patterned ZnO nanodot arrays

    International Nuclear Information System (INIS)

    Song Yijian; Zheng Maojun; Ma Li

    2007-01-01

    Patterned ZnO nanodot (ND) arrays and a ND-cavity microstructure were realized on an anodic alumina membrane (AAM) surface through a spin-coating sol-gel process, which benefits from the morphology and localized negative charge surface of AAM as well as the optimized sol concentration. The growth mechanism is believed to be a self-assembly process. This provides a simple approach to fabricate semiconductor quantum dot (QD) arrays and a QD-cavity system with its advantage in low cost and mass production. Strong ultra-violet emission, a multi-phonon process, and its special structure-related properties were observed in the patterned ZnO ND arrays

  19. Multilevel photonic modules for millimeter-wave phased-array antennas

    Science.gov (United States)

    Paolella, Arthur C.; Bauerle, Athena; Joshi, Abhay M.; Wright, James G.; Coryell, Louis A.

    2000-09-01

    Millimeter wave phased array systems have antenna element sizes and spacings similar to MMIC chip dimensions by virtue of the operating wavelength. Modules designed with traditional planar packaging techniques are therefore difficult to implement. An advantageous way to maintain a small module footprint compatible with Ka-Band and high frequency systems is to take advantage of two leading edge technologies, opto-electronic integrated circuits (OEICs) and multilevel packaging technology. Under a Phase II SBIR these technologies are combined to form photonic modules for optically controlled millimeter wave phased array antennas. The proposed module, consisting of an OEIC integrated with a planar antenna array, will operate in the 40 GHz region. The OEIC consists of an InP based dual-depletion PIN photodetector and distributed amplifier. The multi-level module will be fabricated using an enhanced circuit processing thick-film process. Since the modules are batch fabricated using standard commercial processes, they have the potential to be low cost while maintaining high performance, impacting both military and commercial communications systems.

  20. Fabrication of large NbSi bolometer arrays for CMB applications

    Energy Technology Data Exchange (ETDEWEB)

    Ukibe, M. [AIST, Tsukuba Central 2, Tsukuba, Ibaraki 305-8568 (Japan); CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Belier, B. [CNRS-IEF, Bat 220, Orsay Campus F-91405 (France); Camus, Ph. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France)]. E-mail: philippe.camus@grenoble.cnrs.fr; Dobrea, C. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Dumoulin, L. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Fernandez, B. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France); Fournier, T. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France); Guillaudin, O. [CNRS-LPSC, 53 avenue des Martyrs, Grenoble F-38042 (France); Marnieros, S. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Yates, S.J.C. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France)

    2006-04-15

    Future cosmic microwave background experiments for high-resolution anisotropy mapping and polarisation detection require large arrays of bolometers at low temperature. We have developed a process to build arrays of antenna-coupled bolometers for that purpose. With adjustment of the NbxSi1-x alloy composition, the array can be made of high impedance or superconductive (TES) sensors.

  1. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects in the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of face and hand-writing digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights
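
    A numerical sketch of the central observation for the feature-extraction stage: when an adaptive linear combiner is trained with LMS, a fixed multiplicative gain mismatch on its inputs is absorbed into the learned weights, so the combiner still converges to the desired response. The mismatch level and signal statistics below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_samples, mu = 8, 5000, 0.01

        w_true = rng.normal(size=n_in)               # target linear feature (desired-response generator)
        gain = 1.0 + 0.1 * rng.normal(size=n_in)     # fixed multiplicative device mismatch per input

        w = np.zeros(n_in)                           # on-chip weights, adapted by LMS
        for _ in range(n_samples):
            x = rng.normal(size=n_in)
            d = w_true @ x                           # desired output (ideal, mismatch-free)
            y = w @ (gain * x)                       # analog combiner sees mismatched inputs
            e = d - y
            w += mu * e * (gain * x)                 # LMS update uses the same mismatched signals

        # The learned weights approximate w_true / gain, i.e. the mismatch is compensated during learning.
        print(np.max(np.abs(w * gain - w_true)))     # small residual error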

  2. Cascadia Subduction Zone Earthquake Source Spectra from an Array of Arrays

    Science.gov (United States)

    Gomberg, J. S.; Vidale, J. E.

    2011-12-01

    It is generally accepted that spectral characteristics distinguish 'slow' seismic sources from those of 'ordinary' or 'fast' earthquakes. To explore this difference, we measure ordinary earthquake spectra of about 30 seismic events located near the Cascadia plate interface where ETS regularly occurs. We separate the effects of local site response, regional propagation (attenuation and spreading), and processes near or at the source for a dense dataset recorded on an array of eight seismic micro-arrays. The arrays have apertures of 1-2 km with 21-31 seismographs in each, and are separated by 10-20 km. We assume that the spectrum of each recorded signal may be described by the product of 1) frequency-dependent site response, 2) propagation effects that include geometric spreading and an exponential decay that varies with distance and frequency, and 3) a frequency-dependent source spectrum. Using more than 1000 seismograms from all events recorded at all sites simultaneously, we solve for frequency-dependent site response and source spectra, as well as a single regional Q value. We interpret only the slope of the source terms because most earthquakes have magnitudes less than 0, so we expect that their corner frequencies are higher than the recorded passband. The amplitude variation in the site response within the same array sometimes exceeds a factor of 3, which is consistent with the variation seen visually. We see variability in the slopes of the source spectra comparable to the difference between 'slow' and 'fast' events observed in other studies, which shows a strong correlation with source location. Spectral slopes of spatially clustered sources are nearly identical but usually differ from those of clusters at a distance of a few tens of km, and spectral content varies systematically with location within the distribution of events. While these differences may reflect varying source processes (e.g., rupture velocity, stress drop), the strong correlation
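
    A simplified sketch of the kind of joint inversion described: at a single frequency, the log spectral amplitude of event i at station j is modeled as a source term plus a site term minus an attenuation term proportional to travel time, and all terms are solved by least squares. The matrix construction and the choice of fixing one site term to remove the source/site trade-off are illustrative assumptions, not the authors' exact formulation.

        import numpy as np

        def invert_spectra(log_amp, event_idx, station_idx, travel_time, freq):
            """Solve log A_ij = s_i + r_j - pi * f * t_ij * (1/Q) by least squares.
            Returns (source terms, site terms, Q); the site term of station 0 is fixed to zero."""
            n_ev, n_st = event_idx.max() + 1, station_idx.max() + 1
            n_obs = len(log_amp)
            G = np.zeros((n_obs, n_ev + n_st + 1))
            for k in range(n_obs):
                G[k, event_idx[k]] = 1.0                      # source term of event i
                G[k, n_ev + station_idx[k]] = 1.0             # site term of station j
                G[k, -1] = -np.pi * freq * travel_time[k]     # attenuation term, coefficient is 1/Q
            G = np.delete(G, n_ev, axis=1)                    # fix site term of station 0 to zero
            m, *_ = np.linalg.lstsq(G, log_amp, rcond=None)
            src = m[:n_ev]
            site = np.concatenate(([0.0], m[n_ev : n_ev + n_st - 1]))
            Q = 1.0 / m[-1]
            return src, site, Q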

  3. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    Science.gov (United States)

    Chuang, Chen

    Dilute Nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells (NWSCs) is expected to reach as high as 44% with 1% N and 9% N in the core and shell, respectively. By developing such high efficiency NWSCs on silicon substrates, the cost of solar photovoltaics can be reduced to $61/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the development of the fabrication process of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of Gallium Nitride Phosphide nanowire solar cells. The designs for the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts are discussed and fabricated. The properties of these NWSCs are also characterized to point out the future development of Gallium Nitride Phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi Method is used to optimize the fabrication process of nano-hole templates.

  4. Developing infrared array controller with software real time operating system

    Science.gov (United States)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for a controller of a large format array to reduce the dead time caused by readout and data transfer. Real-time processing has been achieved by dedicated processors including DSP, CPLD, and FPGA devices. However, the dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control the infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of the dedicated processors. A Linux PC equipped with an RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to the real-time processing. A digital I/O board with DMA functions is used for the I/O interface. The signal-processing cores are integrated in the OS kernel as a real-time driver module, which is composed of two virtual devices: the clock processor and the frame processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at a low cost.

  5. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal...by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach...estimation using a sparse uniform linear array with two CW signals of co-prime frequencies,” IEEE International Workshop on Computational Advances

  6. X-ray microcalorimeter arrays fabricated by surface micromachining

    International Nuclear Information System (INIS)

    Hilton, G.C.; Beall, J.A.; Deiker, S.; Vale, L.R.; Doriese, W.B.; Beyer, Joern; Ullom, J.N.; Reintsema, C.D.; Xu, Y.; Irwin, K.D.

    2004-01-01

    We are developing arrays of Mo/Cu transition edge sensor-based detectors for use as X-ray microcalorimeters and sub-millimeter bolometers. We have fabricated 8x8 pixel X-ray microcalorimeter arrays using surface micromachining. Surface-micromachining techniques hold the promise of scalability to much larger arrays and may allow for the integration of in-plane multiplexer elements. In this paper we describe the surface micromachining process and recent improvements in the device geometry that provide for increased mechanical strength. We also present X-ray and heat pulse spectra collected using these detectors

  7. Fabrication of microlens arrays using a CO2-assisted embossing technique

    International Nuclear Information System (INIS)

    Huang, Tzu-Chien; Chan, Bin-Da; Ciou, Jyun-Kai; Yang, Sen-Yeu

    2009-01-01

    This paper reports a method to fabricate microlens arrays with a low processing temperature and a low pressure. The method is based on embossing a softened polymeric substrate over a mold with micro-hole arrays. Due to capillary effects and surface tension, microlens arrays can be formed. The embossing medium is CO2 gas, which supplies a uniform pressing pressure so that large-area microlens arrays can be fabricated. CO2 gas also acts as a solvent to plasticize the polymer substrates. With the special dissolving ability and isotropic pressing capacity of CO2 gas, microlens arrays can be fabricated at a low temperature (lower than Tg) and free of thermally induced residual stress. Such a combined mechanism of dissolving and embossing with CO2 gas makes the fabrication of microlens arrays direct, without complex processes, and more compatible with optical usage. In the study, it is also found that the sag height of the microlenses changes when different CO2 dissolving pressures and times are used. This makes it easy to fabricate microlens arrays of different geometries without using different molds. The quality, uniformity and optical properties of the fabricated microlens arrays have been verified with measurements of the dimensions, surface smoothness, focal length, transmittance and light intensity through the fabricated microlens arrays.
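
    A small sketch of the geometry behind the sag-height and focal-length measurements reported: for a plano-convex microlens of aperture radius a and sag height h, the spherical-cap radius of curvature is R = (a² + h²)/(2h) and the paraxial focal length is f = R/(n − 1). The refractive index and dimensions below are illustrative assumptions.

        def microlens_focal_length(aperture_radius_um: float, sag_height_um: float, n: float = 1.49) -> float:
            """Paraxial focal length (um) of a plano-convex spherical-cap microlens."""
            a, h = aperture_radius_um, sag_height_um
            radius_of_curvature = (a ** 2 + h ** 2) / (2.0 * h)   # spherical-cap geometry
            return radius_of_curvature / (n - 1.0)                # thin plano-convex lensmaker's equation

        # A 100-um-diameter lens with a 12-um sag height (PMMA-like refractive index assumed).
        print(microlens_focal_length(50.0, 12.0))   # about 225 um focal length for these numbers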

  8. Microfabricated Silicon Microneedle Array for Transdermal Drug Delivery

    International Nuclear Information System (INIS)

    Ji, J; Tay, F E; Miao Jianmin; Iliescu, C

    2006-01-01

    This paper presents developed processes for silicon microneedle array microfabrication. Three types of microneedle structures were achieved by isotropic etching in inductively coupled plasma (ICP) using SF6/O2 gases, a combination of isotropic etching with deep etching, and wet etching, respectively. A microneedle array with biodegradable porous tips was further developed based on the fabricated microneedles.

  9. Microfabricated Silicon Microneedle Array for Transdermal Drug Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Ji, J [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, F E [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore); Iliescu, C [Institute of Bioengineering and Nanotechnology, 31 Biopolis Way, Nanos, 04-01, 138669 (Singapore)

    2006-04-01

    This paper presents developed processes for silicon microneedle array microfabrication. Three types of microneedle structures were achieved by isotropic etching in inductively coupled plasma (ICP) using SF6/O2 gases, a combination of isotropic etching with deep etching, and wet etching, respectively. A microneedle array with biodegradable porous tips was further developed based on the fabricated microneedles.

  10. A VLSI-Based High-Performance Raster Image System.

    Science.gov (United States)

    1986-05-08

    and data in broadcast form to the array of memory chips in the frame buffer, shown in the bottom block. This is simply a physical structure to hold up...Principal Investigator: John Poulton. Collaboration on algorithm development: Prof. Jack Goldfeather (Dept. of Mathematics, Carleton College ...1983), Cheng-Hong Hsieh (MS, Computer Science, May 1985), Jeff P. Hultquist, Susan Spach. Undergraduate Research Assistant: Sonya Holder (BS, Physics, May

  11. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are ''number crunchers'' that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation.

  12. Automated Array Assembly, Phase 2. Quarterly technical progress report, April-June 1979

    Energy Technology Data Exchange (ETDEWEB)

    Carbajal, B.G.

    1979-07-01

    The Automated Array Assembly Task, Phase 2 of the Low Cost Solar Array (LSA) Project is a process development task. This contract provides for the fabrication of modules from large area Tandem Junction Cells (TJC). The key activities in this contract effort are (a) Large Area TJC including cell design, process verification and cell fabrication and (b) Tandem Junction Module (TJM) including definition of the cell-module interfaces, substrate fabrication, interconnect fabrication and module assembly. The overall goal is to advance solar cell module process technology to meet the 1986 goal of a production capability of 500 megawatts per year at a cost of less than $500 per peak kilowatt. This contract will focus on the Tandem Junction Module process. During this quarter, effort was focused on design and process verification. The large area TJC design was completed and the design verification was completed. Process variation experiments led to refinements in the baseline TJC process. Formed steel substrates were porcelainized. Cell array assembly techniques using infrared soldering are being checked out. Dummy cell arrays up to 5 cell by 5 cell have been assembled using all backside contacts.

  13. Processes for design, construction and utilisation of arrays of light-emitting diodes and light-emitting diode-coupled optical fibres for multi-site brain light delivery.

    Science.gov (United States)

    Bernstein, Jacob G; Allen, Brian D; Guerra, Alexander A; Boyden, Edward S

    2015-05-01

    Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology.

  14. Dynamical analysis of surface-insulated planar wire array Z-pinches

    Science.gov (United States)

    Li, Yang; Sheng, Liang; Hei, Dongwei; Li, Xingwen; Zhang, Jinhai; Li, Mo; Qiu, Aici

    2018-05-01

    The ablation and implosion dynamics of planar wire array Z-pinches with and without surface insulation are compared and discussed in this paper. This paper first presents a phenomenological model named the ablation and cascade snowplow implosion (ACSI) model, which accounts for the ablation and implosion phases of a planar wire array Z-pinch in a single simulation. The comparison between experimental data and simulation results shows that the ACSI model gives a fairly good description of the dynamical characteristics of planar wire array Z-pinches. Surface insulation introduces notable differences in the ablation phase of planar wire array Z-pinches. The ablation phase is divided into two stages: insulation layer ablation and tungsten wire ablation. The two-stage ablation process of insulated wires is simulated in the ACSI model by updating the formulas describing the ablation process.

  15. Area array interconnection handbook

    CERN Document Server

    Totta, Paul A

    2012-01-01

    Microelectronic packaging has been recognized as an important "enabler" for the solid­ state revolution in electronics which we have witnessed in the last third of the twentieth century. Packaging has provided the necessary external wiring and interconnection capability for transistors and integrated circuits while they have gone through their own spectacular revolution from discrete device to gigascale integration. At IBM we are proud to have created the initial, simple concept of flip chip with solder bump connections at a time when a better way was needed to boost the reliability and improve the manufacturability of semiconductors. The basic design which was chosen for SLT (Solid Logic Technology) in the 1960s was easily extended to integrated circuits in the '70s and VLSI in the '80s and '90s. Three I/O bumps have grown to 3000 with even more anticipated for the future. The package families have evolved from thick-film (SLT) to thin-film (metallized ceramic) to co-fired multi-layer ceramic. A later famil...

  16. The data array, a tool to interface the user to a large data base

    Science.gov (United States)

    Foster, G. H.

    1974-01-01

    Aspects of the processing of spacecraft data are considered. Use of the data array in a large address space as an intermediate form in data processing for a large scientific data base is advocated. Techniques for efficient indexing in data arrays are reviewed and the data array method for mapping an arbitrary structure onto a linear address space is shown. A compromise between the two forms is given. The impact of the data array on the user interface is considered along with implementation.
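
    The record above is only an overview, but the core idea of mapping an arbitrary multi-dimensional structure onto a linear address space is the familiar row-major indexing rule; the short sketch below illustrates it with made-up dimensions (not taken from the paper).

    def linear_index(indices, shape):
        """Map an n-dimensional index tuple onto a row-major linear address:
        offset = ((i0 * d1 + i1) * d2 + i2) * ..."""
        offset = 0
        for i, d in zip(indices, shape):
            if not 0 <= i < d:
                raise IndexError(f"index {i} out of range for dimension of size {d}")
            offset = offset * d + i
        return offset

    # A hypothetical "data array" of shape (records, frames, channels)
    # flattened into one large address space.
    print(linear_index((2, 5, 7), (10, 64, 128)))   # 2*64*128 + 5*128 + 7 = 17031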

  17. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The range of papers varies from a case report to reviews, thereby targeting wider audiences working in this field. The research focus of SNP arrays is often human cancers but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of the SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also to give guidelines for future applications of SNP arrays.

  18. Physical Limitations To Nonuniformity Correction In IR Focal Plane Arrays

    Science.gov (United States)

    Scribner, D. A.; Kruer, M. R.; Gridley, J. C.; Sarkady, K.

    1988-05-01

    Simple nonuniformity correction algorithms currently in use can be severely limited by nonlinear response characteristics of the individual pixels in an IR focal plane array. Although more complicated multi-point algorithms improve the correction process, they too can be limited by nonlinearities. Furthermore, analysis of single-pixel noise power spectra usually shows some level of 1/f noise. This in turn causes pixel outputs to drift independently of each other, thus causing the spatial noise (often called fixed pattern noise) of the array to increase as a function of time since the last calibration. Measurements are presented for two arrays (a HgCdTe hybrid and a Pt:Si CCD) describing pixel nonlinearities, 1/f noise, and residual spatial noise (after nonuniformity correction). Of particular emphasis is spatial noise as a function of the elapsed time since the last calibration and the calibration process selected. The resulting spatial noise is examined in terms of its effect on the NEΔT performance of each array tested and comparisons are made. Finally, a discussion of implications for array developers is given.
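
    For context, the simple correction schemes the abstract refers to are typically two-point (gain/offset) calibrations that assume a linear pixel response; the sketch below shows that idealized case, which is exactly what pixel nonlinearity and 1/f drift break in practice. The array size, temperatures and noise levels are illustrative only.

    import numpy as np

    def two_point_nuc(raw, resp_cold, resp_hot, t_cold, t_hot):
        """Two-point nonuniformity correction assuming a linear pixel response.
        Per-pixel gain and offset come from two uniform reference exposures."""
        gain = (t_hot - t_cold) / (resp_hot - resp_cold)
        offset = t_cold - gain * resp_cold
        return gain * raw + offset

    # Illustrative 4x4 focal plane with random gain/offset nonuniformity.
    rng = np.random.default_rng(0)
    true_gain = 1.0 + 0.1 * rng.standard_normal((4, 4))
    true_offset = 5.0 * rng.standard_normal((4, 4))

    def pixel_response(temperature):            # perfectly linear pixels
        return true_gain * temperature + true_offset

    corrected = two_point_nuc(pixel_response(300.0),
                              pixel_response(280.0), pixel_response(320.0),
                              280.0, 320.0)
    print(np.allclose(corrected, 300.0))        # True only because pixels are linear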

  19. A novel method to design sparse linear arrays for ultrasonic phased array.

    Science.gov (United States)

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best or a better configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm, but found that the resulting arrays had poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are first fixed by the minimum-redundancy array to ensure spatial resolution, and a genetic algorithm is then used to optimize the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the effectiveness of the method.
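
    As a small illustration of the minimum redundancy linear array idea the abstract builds on (not the authors' hybrid design procedure), the sketch below brute-forces a restricted minimum-redundancy layout for a handful of elements: positions on an integer grid whose pairwise spacings cover every lag up to the largest achievable aperture.

    from itertools import combinations

    def restricted_mra(n_elements, max_aperture=30):
        """Brute-force search for a small restricted minimum-redundancy linear
        array: the largest aperture whose pairwise differences cover 1..aperture."""
        for aperture in range(max_aperture, n_elements - 2, -1):
            for inner in combinations(range(1, aperture), n_elements - 2):
                positions = (0,) + inner + (aperture,)
                lags = {abs(a - b) for a, b in combinations(positions, 2)}
                if lags >= set(range(1, aperture + 1)):
                    return positions
        return None

    print(restricted_mra(4))   # (0, 1, 4, 6): the classic 4-element layout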

  20. CASTOR a VLSI CMOS mixed analog-digital circuit for low noise multichannel counting applications

    International Nuclear Information System (INIS)

    Comes, G.; Loddo, F.; Hu, Y.; Kaplon, J.; Ly, F.; Turchetta, R.; Bonvicini, V.; Vacchi, A.

    1996-01-01

    In this paper we present the design and first experimental results of a VLSI mixed analog-digital 1.2 μm CMOS circuit (CASTOR) for multichannel radiation detector applications demanding low noise amplification and counting of radiation pulses. This circuit is meant to be connected to pixel-like detectors. Imaging can be obtained by counting the number of hits in each pixel during a user-controlled exposure time. Each channel of the circuit features an analog and a digital part. In the former, a charge preamplifier is followed by a CR-RC shaper with an output buffer and a threshold discriminator. In the digital part, a 16-bit counter is present together with some control logic. The readout of the counters is done serially on a common tri-state output. Daisy-chaining is possible. A 4-channel prototype has been built. This prototype has been optimised for use in the digital radiography Syrmep experiment at the Elettra synchrotron machine in Trieste (Italy): its main design parameters are a shaping time of about 850 ns, a gain of 190 mV/fC and ENC (e⁻ rms) = 60 + 17·C (pF). The counting rate per channel, limited by the analog part, can be as high as about 200 kHz. Characterisation of the circuit and first tests with silicon microstrip detectors are presented. They show the circuit works according to design specification and can be used for imaging applications. (orig.)

  1. Optical technology for microwave applications VI and optoelectronic signal processing for phased-array antennas III; Proceedings of the Meeting, Orlando, FL, Apr. 20-23, 1992

    Science.gov (United States)

    Yao, Shi-Kay; Hendrickson, Brian M.

    The following topics related to optical technology for microwave applications are discussed: advanced acoustooptic devices, signal processing device technologies, optical signal processor technologies, microwave and optomicrowave devices, advanced lasers and sources, wideband electrooptic modulators, and wideband optical communications. The topics considered in the discussion of optoelectronic signal processing for phased-array antennas include devices, signal processing, and antenna systems.

  2. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    Directory of Open Access Journals (Sweden)

    Yuan-Jyun Chang

    2016-12-01

    Full Text Available The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexity is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons with existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
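
    A minimal sketch of the kind of feature extraction the abstract describes: split each detected spike at its peak sample and use the area of the two portions as a two-dimensional feature. The details (absolute-value area, inclusion of the peak sample in the first portion, the synthetic waveform) are assumptions for illustration, not taken from the paper.

    import numpy as np

    def peak_split_area_features(spike):
        """Return (area up to and including the peak, area after the peak)."""
        spike = np.asarray(spike, dtype=float)
        peak = int(np.argmax(np.abs(spike)))
        return np.abs(spike[:peak + 1]).sum(), np.abs(spike[peak + 1:]).sum()

    # Illustrative synthetic spike waveform.
    t = np.linspace(0.0, 1.0, 64)
    spike = np.exp(-((t - 0.3) / 0.05) ** 2) - 0.4 * np.exp(-((t - 0.5) / 0.1) ** 2)
    print(peak_split_area_features(spike))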

  3. Synthesis and characterization of Mn-doped ZnO column arrays

    International Nuclear Information System (INIS)

    Yang Mei; Guo Zhixing; Qiu Kehui; Long Jianping; Yin Guangfu; Guan Denggao; Liu Sutian; Zhou Shijie

    2010-01-01

    Mn-doped ZnO column arrays were successfully synthesized by a conventional sol-gel process. The effects of the Mn/Zn atomic ratio and reaction time were investigated, and the morphology, orientation and optical properties of the Mn-doped ZnO column arrays were characterized by SEM, XRD and photoluminescence (PL) spectroscopy. The results show that a Mn/Zn atomic ratio of 0.1 and a growth time of 12 h are the optimal conditions for the preparation of densely distributed ZnO column arrays. XRD analysis shows that the Mn-doped ZnO column arrays are highly c-axis oriented. For the Mn-doped ZnO column arrays, an obvious increase in photoluminescence intensity is observed at wavelengths of ∼395 nm and ∼413 nm compared to pure ZnO column arrays.

  4. Resonance spectra of diabolo optical antenna arrays

    Directory of Open Access Journals (Sweden)

    Hong Guo

    2015-10-01

    Full Text Available A complete set of diabolo optical antenna arrays with different waist widths and periods was fabricated on a sapphire substrate by using a standard e-beam lithography and lift-off process. The fabricated diabolo optical antenna arrays were characterized by measuring the transmittance and reflectance with a microscope-coupled FTIR spectrometer. It was found experimentally that reducing the waist width significantly shifts the resonance to longer wavelengths, and that narrowing the waist of the antennas is more effective than increasing the period of the array for tuning the resonance wavelength. It is also found that the magnetic field enhancement near the antenna waist is correlated with the shift of the resonance wavelength.

  5. Conducting polymer nanowire arrays for high performance supercapacitors.

    Science.gov (United States)

    Wang, Kai; Wu, Haiping; Meng, Yuena; Wei, Zhixiang

    2014-01-15

    This Review provides a brief summary of the most recent research developments in the fabrication and application of one-dimensional ordered conducting polymer nanostructures (especially nanowire arrays) and their composites as electrodes for supercapacitors. By controlling the nucleation and growth process of polymerization, aligned conducting polymer nanowire arrays and their composites with nano-carbon materials can be prepared by employing in situ chemical polymerization or electrochemical polymerization without a template. This kind of nanostructure (such as polypyrrole and polyaniline nanowire arrays) possesses high capacitance and superior rate capability, ascribed to the large electrochemical surface area and the optimal ion diffusion path in the ordered nanowire structure, and is proved to be an ideal electrode material for high performance supercapacitors. Furthermore, flexible, micro-scale, threadlike, and multifunctional supercapacitors are introduced based on conducting polyaniline nanowire arrays and their composites. These prototypes of supercapacitors utilize the high flexibility, good processability, and large capacitance of conducting polymers, which efficiently extends the use of supercapacitors to various situations, even in complicated systems integrating different electronic devices. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Materials preparation and fabrication of pyroelectric polymer/silicon MOSFET detector arrays. Final report

    International Nuclear Information System (INIS)

    Bloomfield, P.

    1992-01-01

    The authors have delivered several 64-element linear arrays of pyroelectric elements fully integrated on silicon wafers with MOS readout devices. They have delivered detailed drawings of the linear arrays to LANL. They have processed a series of two inch wafers per submitted design. Each two inch wafer contains two 64 element arrays. After spin-coating copolymer onto the arrays, vacuum depositing the top electrodes, and polarizing the copolymer films so as to make them pyroelectrically active, each wafer was split in half. The authors developed a thicker oxide coating separating the extended gate electrode (beneath the polymer detector) from the silicon. This should reduce its parasitic capacitance and hence improve the S/N. They provided LANL three processed 64 element sensor arrays. Each array was affixed to a connector panel and selected solder pads of the common ground, the common source voltage supply connections, the 64 individual drain connections, and the 64 drain connections (for direct pyroelectric sensing response rather than the MOSFET action) were wire bonded to the connector panel solder pads. This entails (64 + 64 + 1 + 1) = 130 possible bond connections per 64 element array. This report now details the processing steps and the progress of the individual wafers as they were carried through from beginning to end

  7. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  8. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Todd, Richard A. [RIS Corp.; Radford, David C. [ORNL Physics Div.

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.

  9. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We

  10. Micropatterned arrays of porous silicon: toward sensory biointerfaces.

    Science.gov (United States)

    Flavel, Benjamin S; Sweetman, Martin J; Shearer, Cameron J; Shapter, Joseph G; Voelcker, Nicolas H

    2011-07-01

    We describe the fabrication of arrays of porous silicon spots by means of photolithography where a positive photoresist serves as a mask during the anodization process. In particular, photoluminescent arrays and porous silicon spots suitable for further chemical modification and the attachment of human cells were created. The produced arrays of porous silicon were chemically modified by means of a thermal hydrosilylation reaction that facilitated immobilization of the fluorescent dye lissamine, and alternatively, the cell adhesion peptide arginine-glycine-aspartic acid-serine. The latter modification enabled the selective attachment of human lens epithelial cells on the peptide functionalized regions of the patterns. This type of surface patterning, using etched porous silicon arrays functionalized with biological recognition elements, presents a new format of interfacing porous silicon with mammalian cells. Porous silicon arrays with photoluminescent properties produced by this patterning strategy also have potential applications as platforms for in situ monitoring of cell behavior.

  11. Developing barbed microtip-based electrode arrays for biopotential measurement.

    Science.gov (United States)

    Hsu, Li-Sheng; Tung, Shu-Wei; Kuo, Che-Hsi; Yang, Yao-Joe

    2014-07-10

    This study involved fabricating barbed microtip-based electrode arrays by using silicon wet etching. KOH anisotropic wet etching was employed to form a standard pyramidal microtip array and HF/HNO3 isotropic etching was used to fabricate barbs on these microtips. To improve the electrical conductance between the tip array on the front side of the wafer and the electrical contact on the back side, a through-silicon via was created during the wet etching process. The experimental results show that the forces required to detach the barbed microtip arrays from human skin, a polydimethylsiloxane (PDMS) polymer, and a polyvinylchloride (PVC) film were larger compared with those required to detach microtip arrays that lacked barbs. The impedances of the skin-electrode interface were measured and the performance levels of the proposed dry electrode were characterized. Electrode prototypes that employed the proposed tip arrays were implemented. Electroencephalogram (EEG) and electrocardiography (ECG) recordings using these electrode prototypes were also demonstrated.

  12. Developing Barbed Microtip-Based Electrode Arrays for Biopotential Measurement

    Directory of Open Access Journals (Sweden)

    Li-Sheng Hsu

    2014-07-01

    Full Text Available This study involved fabricating barbed microtip-based electrode arrays by using silicon wet etching. KOH anisotropic wet etching was employed to form a standard pyramidal microtip array and HF/HNO3 isotropic etching was used to fabricate barbs on these microtips. To improve the electrical conductance between the tip array on the front side of the wafer and the electrical contact on the back side, a through-silicon via was created during the wet etching process. The experimental results show that the forces required to detach the barbed microtip arrays from human skin, a polydimethylsiloxane (PDMS) polymer, and a polyvinylchloride (PVC) film were larger compared with those required to detach microtip arrays that lacked barbs. The impedances of the skin-electrode interface were measured and the performance levels of the proposed dry electrode were characterized. Electrode prototypes that employed the proposed tip arrays were implemented. Electroencephalogram (EEG) and electrocardiography (ECG) recordings using these electrode prototypes were also demonstrated.

  13. Sonochemically Fabricated Microelectrode Arrays for Use as Sensing Platforms

    Directory of Open Access Journals (Sweden)

    Stuart D. Collyer

    2010-05-01

    Full Text Available The development, manufacture, modification and subsequent utilisation of sonochemically-formed microelectrode arrays is described for a range of applications. Initial fabrication of the sensing platform utilises ultrasonic ablation of electrochemically insulating polymers deposited upon conductive carbon substrates, forming an array of up to 70,000 microelectrode pores cm⁻². Electrochemical and optical analyses using these arrays, their enhanced signal response and their stir independence are all discussed. The growth of conducting polymeric “mushroom” protrusion arrays with entrapped biological entities, thereby forming biosensors, is detailed. The simplicity and inexpensiveness of this approach, lending itself ideally to mass fabrication, coupled with unrivalled sensitivity and stir independence, makes commercial viability of this process a reality. Applications of microelectrode arrays as functional components within sensors include devices for the detection of chlorine, glucose, ethanol and pesticides. Immunosensors based on microelectrode arrays are described within this monograph for antigens associated with prostate cancer and transient ischemic attacks (strokes).

  14. Imaging spectroscopy using embedded diffractive optical arrays

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford

    2017-09-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough for a payload on a small satellite, mini-UAV or commercial quadcopter, or for man-portable use. It also describes an application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images covered by each lenslet. We have developed various systems using a different number of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the number of different spectral images collected simultaneously in each frame of the camera. A 2 x 2 lenslet array will image
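
    To make the sub-image idea concrete, the sketch below simply slices a focal-plane frame into the regions covered by an n x n lenslet array; it is a toy illustration with made-up dimensions, not PAT's processing chain.

    import numpy as np

    def split_into_subimages(frame, n_rows, n_cols):
        """Divide a focal-plane frame into the sub-images covered by an
        n_rows x n_cols lenslet array (assumes the frame divides evenly)."""
        h, w = frame.shape
        sub_h, sub_w = h // n_rows, w // n_cols
        return [frame[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
                for r in range(n_rows) for c in range(n_cols)]

    frame = np.zeros((256, 256))                    # illustrative detector frame
    subimages = split_into_subimages(frame, 2, 2)   # a 2 x 2 lenslet array
    print(len(subimages), subimages[0].shape)       # 4 sub-images of 128 x 128 pixels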

  15. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    Full Text Available For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
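
    The quantities traded off in the abstract can be written down compactly for a uniformly spaced line array of omnidirectional sensors: directivity D = |w^H a|^2 / (w^H B w) with B_mn = sinc(kd(m - n)), and white noise gain WNG = |w^H a|^2 / (w^H w). The sketch below evaluates these standard expressions; the element count, spacing and weights are illustrative and are not the paper's optimized solution.

    import numpy as np

    def directivity_and_wng(weights, spacing, wavelength, steer_deg):
        """Directivity and white-noise gain of a uniform linear array of
        omnidirectional sensors; steer_deg is measured from the array axis,
        so end-fire corresponds to steer_deg = 0."""
        w = np.asarray(weights, dtype=complex)
        m = np.arange(len(w))
        k = 2.0 * np.pi / wavelength
        a = np.exp(1j * k * spacing * m * np.cos(np.deg2rad(steer_deg)))
        diff = k * spacing * (m[:, None] - m[None, :])
        B = np.sinc(diff / np.pi)                 # np.sinc(x) = sin(pi x)/(pi x)
        num = np.abs(np.vdot(w, a)) ** 2          # |w^H a|^2
        return num / np.real(w.conj() @ B @ w), num / np.real(np.vdot(w, w))

    # Illustrative: 8 elements, quarter-wavelength spacing, conventional
    # (delay-and-sum) end-fire weights equal to the steering vector.
    n, d, lam = 8, 0.25, 1.0
    w = np.exp(1j * 2.0 * np.pi / lam * d * np.arange(n))
    print(directivity_and_wng(w, d, lam, steer_deg=0.0))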

  16. LOFAR, the low frequency array

    Science.gov (United States)

    Vermeulen, R. C.

    2012-09-01

    LOFAR, the Low Frequency Array, is a next-generation radio telescope designed by ASTRON, with antenna stations concentrated in the north of the Netherlands and currently spread into Germany, France, Sweden and the United Kingdom; plans for more LOFAR stations exist in several other countries. Utilizing a novel, phased-array design, LOFAR is optimized for the largely unexplored low frequency range between 30 and 240 MHz. Digital beam-forming techniques make the LOFAR system agile and allow for rapid re-pointing of the telescopes as well as the potential for multiple simultaneous observations. Processing (e.g. cross-correlation) takes place in the LOFAR BlueGene/P supercomputer, and associated post-processing facilities. With its dense core (inner few km) array and long (more than 1000 km) interferometric baselines, LOFAR reaches unparalleled sensitivity and resolution in the low frequency radio regime. The International LOFAR Telescope (ILT) is now issuing its first call for observing projects that will be peer reviewed and selected for observing starting in December. Part of the allocations will be made on the basis of a fully Open Skies policy; there are also reserved fractions assigned by national consortia in return for contributions from their country to the ILT. In this invited talk, the gradually expanding complement of operationally verified observing modes and capabilities are reviewed, and some of the exciting first astronomical results are presented.

  17. An efficient method for evaluating RRAM crossbar array performance

    Science.gov (United States)

    Song, Lin; Zhang, Jinyu; Chen, An; Wu, Huaqiang; Qian, He; Yu, Zhiping

    2016-06-01

    An efficient method is proposed in this paper to mitigate the computational burden in resistive random access memory (RRAM) array simulation. In the worst-case scenario, a 4 Mb RRAM array with line resistance is reduced to a much smaller equivalent array using this method. For 1S1R-RRAM array structures, static and statistical parameters in both reading and writing processes are simulated. Error analysis is performed to prove the reliability of the algorithm when the line resistance is extremely small compared with the junction resistance. Results show that high precision is maintained even if the size of the RRAM array is reduced by one thousand times, which indicates significant improvements in both computational efficiency and memory requirements.

  18. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and that of applications that can exploit such systems. The lectures seek to discuss parallelism in a historical context, and to identify all the main aspects of concurrency in computation right up to the present time. Included will be consideration of the important question as to what use parallelism might be in the field of data processing. (orig.)

  19. Comparison of Computational and Experimental Microphone Array Results for an 18%-Scale Aircraft Model

    Science.gov (United States)

    Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.

    2015-01-01

    An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.
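
    For readers unfamiliar with the processing chain, conventional frequency-domain beamforming steers the cross-spectral matrix of the microphone signals over a grid of candidate source positions; DAMAS and CLEAN-SC then deconvolve that map. The sketch below shows only the conventional step, with a made-up array, grid and source; it is not the NASA LaRC array or processing code.

    import numpy as np

    def conventional_beamform_map(csm, mic_xyz, grid_xyz, freq, c=343.0):
        """Steered-response power map from a cross-spectral matrix (CSM)."""
        k = 2.0 * np.pi * freq / c
        out = np.empty(len(grid_xyz))
        for i, g in enumerate(grid_xyz):
            r = np.linalg.norm(mic_xyz - g, axis=1)      # mic-to-grid distances
            v = np.exp(-1j * k * r) / r                  # monopole steering vector
            v /= np.linalg.norm(v)
            out[i] = np.real(np.conj(v) @ csm @ v)       # steered CSM power
        return out

    # Illustrative: 16-microphone planar array, one synthetic source at x = 0.1 m.
    rng = np.random.default_rng(2)
    mics = np.c_[rng.uniform(-0.5, 0.5, (16, 2)), np.zeros(16)]
    src = np.array([0.1, 0.0, 1.0])
    k = 2.0 * np.pi * 5000.0 / 343.0
    dist = np.linalg.norm(mics - src, axis=1)
    p = np.exp(-1j * k * dist) / dist
    csm = np.outer(p, np.conj(p))
    grid = np.array([[x, 0.0, 1.0] for x in np.linspace(-0.3, 0.3, 13)])
    print(np.round(conventional_beamform_map(csm, mics, grid, 5000.0), 3))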

  20. Micromolding for ceramic microneedle arrays

    NARCIS (Netherlands)

    van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; Lüttge, Regina

    2011-01-01

    The fabrication process of ceramic microneedle arrays (MNAs) is presented. This includes the manufacturing of an SU-8/Si-master, its double replication resulting in a PDMS mold for production by micromolding and ceramic sintering. The robustness of the replicated structures was tested by means of

  1. A novel method of microneedle array fabrication using inclined deep x-ray exposure

    International Nuclear Information System (INIS)

    Moon, Sang Jun; Jin, Chun Yan; Lee, Seung S

    2006-01-01

    We report a novel fabrication method for a microneedle array with three-dimensional features, together with its replication by a 'hot-pressing' process using the biocompatible material PLLA (poly-L-lactide). Using an inclined deep X-ray exposure technique, we fabricate a band-type microneedle array as a single body on the same material base. Since the single-body feature avoids adhesion problems between the microneedle shanks and the base during the mold peel-off step, the PMMA (poly(methyl methacrylate)) microneedle array mold insert can be used for a molding process together with a soft PDMS (polydimethylsiloxane) mold. The inclined side exposure also creates complex three-dimensional features in the regions left unexposed during the two successive exposure steps, and the second exposure requires no additional mask alignment after the first side exposure. The fabricated band-type microneedle array mold inserts are assembled into a large-area, patch-type, out-of-plane microneedle array. The biocompatible microneedle array can thus be taken to laboratory-scale mass production using the single-body PMMA mold insert and the 'hot-pressing' process.

  2. A novel method of microneedle array fabrication using inclined deep x-ray exposure

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Sang Jun; Jin, Chun Yan; Lee, Seung S [Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 373-1, Guseong-dong, Yuseong-dong, Daejeon (Korea, Republic of)

    2006-04-01

    We report a novel fabrication method for a microneedle array with three-dimensional features, together with its replication by a 'hot-pressing' process using the biocompatible material PLLA (poly-L-lactide). Using an inclined deep X-ray exposure technique, we fabricate a band-type microneedle array as a single body on the same material base. Since the single-body feature avoids adhesion problems between the microneedle shanks and the base during the mold peel-off step, the PMMA (poly(methyl methacrylate)) microneedle array mold insert can be used for a molding process together with a soft PDMS (polydimethylsiloxane) mold. The inclined side exposure also creates complex three-dimensional features in the regions left unexposed during the two successive exposure steps, and the second exposure requires no additional mask alignment after the first side exposure. The fabricated band-type microneedle array mold inserts are assembled into a large-area, patch-type, out-of-plane microneedle array. The biocompatible microneedle array can thus be taken to laboratory-scale mass production using the single-body PMMA mold insert and the 'hot-pressing' process.

  3. Solving “Antenna Array Thinning Problem” Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Rajashree Jain

    2012-01-01

    Full Text Available Thinning involves reducing the total number of active elements in an antenna array without causing major degradation in system performance. Dynamic thinning is the process of achieving this under real-time conditions. It is required to find a strategic subset of antenna elements for thinning so as to obtain optimum performance. From a mathematical perspective this is a nonlinear, multidimensional problem with multiple objectives and many constraints. A solution to such a problem cannot be obtained by classical analytical techniques; it is necessary to employ some type of search algorithm that can lead to a practical solution in an optimal manner. The present paper discusses an approach using a genetic algorithm for array thinning. After discussing the basic concepts of antenna arrays, array thinning, dynamic thinning, and the application methodology, simulation results of applying the technique to linear and planar arrays are presented.
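
    The sketch below is a toy version of the general approach described: each candidate thinned array is a 0/1 mask over a half-wavelength-spaced line array, the fitness is the peak sidelobe level of its broadside pattern, and a simple genetic algorithm (selection, one-point crossover, repair to a fixed number of active elements) searches the configuration space. Population size, operators and parameters are illustrative assumptions, not the paper's.

    import numpy as np

    rng = np.random.default_rng(3)

    def peak_sidelobe_db(mask, d=0.5, n_u=1001):
        """Fitness: peak sidelobe level (dB) of a thinned, half-wavelength-spaced
        broadside linear array described by the 0/1 element mask."""
        pos = np.flatnonzero(mask) * d                     # positions in wavelengths
        u = np.linspace(-1.0, 1.0, n_u)                    # u = sin(theta)
        af = np.abs(np.exp(2j * np.pi * np.outer(u, pos)).sum(axis=1))
        af_db = 20.0 * np.log10(af / af.max() + 1e-12)
        mainlobe = np.abs(u) < 1.0 / (mask.sum() * d)      # rough main-lobe exclusion
        return af_db[~mainlobe].max()

    def thin_array_ga(n_elem=40, n_on=28, pop=60, gens=60):
        def random_mask():
            m = np.zeros(n_elem, dtype=int)
            m[rng.choice(n_elem, n_on, replace=False)] = 1
            return m

        def repair(mask):                                  # keep exactly n_on elements on
            ones, zeros = np.flatnonzero(mask == 1), np.flatnonzero(mask == 0)
            if len(ones) > n_on:
                mask[rng.choice(ones, len(ones) - n_on, replace=False)] = 0
            elif len(ones) < n_on:
                mask[rng.choice(zeros, n_on - len(ones), replace=False)] = 1
            return mask

        population = [random_mask() for _ in range(pop)]
        for _ in range(gens):
            order = np.argsort([peak_sidelobe_db(m) for m in population])
            parents = [population[i] for i in order[: pop // 2]]
            children = []
            while len(children) < pop - len(parents):
                a, b = rng.choice(len(parents), 2, replace=False)
                cut = int(rng.integers(1, n_elem - 1))     # one-point crossover
                child = np.concatenate([parents[a][:cut], parents[b][cut:]])
                children.append(repair(child))
            population = parents + children
        best = min(population, key=peak_sidelobe_db)
        return best, peak_sidelobe_db(best)

    mask, psl = thin_array_ga()
    print(mask, round(psl, 2), "dB")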

  4. Estimation of surface impedance using different types of microphone arrays

    DEFF Research Database (Denmark)

    Richard, Antoine Philippe André; Fernandez Grande, Efren; Brunskog, Jonas

    2017-01-01

    This study investigates microphone array methods to measure the angle-dependent surface impedance of acoustic materials. The methods are based on the reconstruction of the sound field on the surface of the material, using a wave expansion formulation. The reconstruction of both the pressure and the particle velocity leads to an estimation of the surface impedance for a given angle of incidence. A porous-type absorber sample is tested experimentally in anechoic conditions for different array geometries, sample sizes, incidence angles, and distances between the array and sample. In particular, the performances of a rigid spherical array and a double-layer planar array are examined. The use of sparse array processing methods and conventional regularization approaches is studied. In addition, the influence of the size of the sample on the surface impedance estimation is investigated using both...
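
    Once the pressure and normal particle velocity on the surface have been reconstructed, the final estimation step reduces to standard relations; the sketch below shows that last step under a plane-wave, locally reacting assumption, with made-up input values (not measured data from the study).

    import numpy as np

    def impedance_and_absorption(p_surf, un_surf, theta_deg, rho=1.21, c=343.0):
        """Angle-dependent surface impedance Z = p/u_n, plus the plane-wave
        reflection and absorption coefficients of a locally reacting surface."""
        z_s = p_surf / un_surf
        cos_t = np.cos(np.deg2rad(theta_deg))
        refl = (z_s * cos_t - rho * c) / (z_s * cos_t + rho * c)
        return z_s, 1.0 - np.abs(refl) ** 2

    # Hypothetical reconstructed surface pressure and normal velocity.
    z, alpha = impedance_and_absorption(1.0 + 0.2j, (1.0 + 0.2j) / 800.0, theta_deg=30.0)
    print(z, round(alpha, 3))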

  5. Coupling in reflector arrays

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1968-01-01

    In order to reduce the space occupied by a reflector array, it is desirable to arrange the array antennas as close to each other as possible; however, in this case coupling between the array antennas will reduce the reflecting properties of the reflector array. The purpose of the present communication...

  6. Low power signal processing research at Stanford

    Science.gov (United States)

    Burr, J.; Williamson, P. R.; Peterson, A.

    1991-01-01

    This paper gives an overview of the research being conducted at Stanford University's Space, Telecommunications, and Radioscience Laboratory in the area of low energy computation. It discusses the work we are doing in large scale digital VLSI neural networks, interleaved processor and pipelined memory architectures, energy estimation and optimization, multichip module packaging, and low voltage digital logic.

  7. Solar cell array design handbook - The principles and technology of photovoltaic energy conversion

    Science.gov (United States)

    Rauschenbach, H. S.

    1980-01-01

    Photovoltaic solar cell array design and technology for ground-based and space applications are discussed from the user's point of view. Solar array systems are described, with attention given to array concepts, historical development, applications and performance, and the analysis of array characteristics, circuits, components, performance and reliability is examined. Aspects of solar cell array design considered include the design process, photovoltaic system and detailed array design, and the design of array thermal, radiation shielding and electromagnetic components. Attention is then given to the characteristics and design of the separate components of solar arrays, including the solar cells, optical elements and mechanical elements, and the fabrication, testing, environmental conditions and effects and material properties of arrays and their components are discussed.

  8. Ultrathin NbN film superconducting single-photon detector array

    International Nuclear Information System (INIS)

    Smirnov, K; Korneev, A; Minaeva, O; Divochiy, A; Tarkhov, M; Ryabchun, S; Seleznev, V; Kaurova, N; Voronov, B; Gol'tsman, G; Polonsky, S

    2007-01-01

    We report on the fabrication process of a 2 x 2 superconducting single-photon detector (SSPD) array. The SSPD array is made from ultrathin NbN film and is operated at liquid helium temperatures. Each detector is a nanowire-based structure patterned by an electron beam lithography process. Advances in fabrication technology allowed us to produce highly uniform strips and to preserve the superconducting properties of the unpatterned film. The SSPDs exhibit up to 30% quantum efficiency in the near infrared and up to 1% at 5-μm wavelength. Owing to the 120 MHz counting rate and 18 ps jitter, a time-domain multiplexing read-out is proposed for large-scale SSPD arrays. The single-pixel SSPD has already found practical application in non-invasive testing of semiconductor very-large-scale integrated circuits. The SSPD significantly outperformed traditional single-photon counting avalanche diodes.

  9. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    Science.gov (United States)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High power laser sources are used in various production tools for microelectronic products and solar cells, in applications including annealing, lithography, edge isolation, dicing and patterning. Besides the right choice of the laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of high importance for the right processing speed, quality and yield. For industrial applications, an adequate understanding of the physics of the light-matter interaction behind the process is equally important. Simulations of the tool performance carried out in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software founded on the Maxwell equations, taking into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles and performance results are presented with a special emphasis on resilience, cost reduction and process reliability.

  10. VLSI for High-Speed Digital Signal Processing

    Science.gov (United States)

    1994-09-30

    ...particular, the design, layout and fabrication of integrated circuits. The primary project for this grant has been the design and implementation of a... The FDSBC algorithm, targeted at 33.36 dB, and the FRSBC algorithm, targeted at 0.5 bits/pixel, are compared in terms of PSNR (dB) and rate (bpp) for several filters; a filter bank, as shown in Fig. 6, is used, yielding a total of 16 subbands.

  11. A Platform for Manufacturable Stretchable Micro-electrode Arrays

    NARCIS (Netherlands)

    Khoshfetrat Pakazad, S.; Savov, A.; Braam, S.R.; Dekker, R.

    2012-01-01

    A platform for the batch fabrication of pneumatically actuated Stretchable Micro-Electrode Arrays (SMEAs) by using state-of-the-art micro-fabrication techniques and materials is demonstrated. The proposed fabrication process avoids the problems normally associated with processing of thin film

  12. The Kepler DB, a Database Management System for Arrays, Sparse Arrays and Binary Data

    Science.gov (United States)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-01-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database (Kepler DB) management system was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  13. The Kepler DB: a database management system for arrays, sparse arrays, and binary data

    Science.gov (United States)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-07-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database management system (Kepler DB)was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  14. Aleutian Array of Arrays (A-cubed) to probe a broad spectrum of fault slip under the Aleutian Islands

    Science.gov (United States)

    Ghosh, A.; LI, B.

    2016-12-01

    The Alaska-Aleutian subduction zone is one of the most seismically active subduction zones on this planet. It is characterized by remarkable along-strike variations in seismic behavior and more than 50 active volcanoes, and presents a unique opportunity to serve as a natural laboratory to study subduction zone processes, including fault dynamics. Yet details of the seismicity pattern, the spatiotemporal distribution of slow earthquakes, the nature of the interaction between slow and fast earthquakes and their implication for tectonic behavior remain unknown. We use a hybrid seismic network approach and install 3 mini seismic arrays and 5 stand-alone stations to simultaneously image the subduction fault and a nearby volcanic system (Makushin). The arrays and stations are strategically located on Unalaska Island, where prolific tremor activity was detected and located by a solo pilot array in summer 2012. The hybrid network is operational between summer 2015 and 2016 in continuous mode. One of the three arrays started in summer 2014 and provides additional data covering a longer time span. The pilot array on Akutan Island recorded continuous seismic data for 2 months. An automatic beam-backprojection analysis detects almost daily tremor activity, with an average of more than an hour per day. We imaged two active sources separated by a tremor gap. The western source, right under Unalaska Island, shows the most prolific activity with a hint of steady migration. In addition, we are able to identify more than 10 families of low frequency earthquakes (LFEs) in this area. They are located within the tremor source area as imaged by the beam-backprojection technique. Application of a matched-filter technique reveals that intervals between LFE activities are shorter during tremor activity and longer during quiet periods. We expect to present new results from freshly obtained data. The experiment A-cubed is illuminating subduction zone processes under Unalaska Island in unprecedented

  15. Low-cost solar array progress and plans

    Science.gov (United States)

    Callaghan, W. T.

    It is pointed out that significant redirection has occurred in the U.S. Department of Energy (DOE) Photovoltaics Program, and thus in the Flat-Plate Solar Array Project (FSA), since the 3rd European Communities Conference. The Silicon Materials Task now has the objective of sponsoring theoretical and experimental research on silicon material refinement technology suitable for photovoltaic flat-plate solar arrays. With respect to the hydrochlorination reaction, a process proof of concept was completed through definition of the reaction kinetics, catalyst, and reaction characteristics. In connection with the dichlorosilane chemical vapor deposition process, a preliminary design was completed of an experimental process system development unit with a capacity of 100 to 200 MT/yr of Si. Attention is also given to the silicon-sheet formation research area, environmental isolation research, the cell and module formation task, the engineering sciences area, and the module performance and failure analysis area.

  16. Concurrent array-based queue

    Science.gov (United States)

    Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2015-01-06

    According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue.
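
    A minimal, single-threaded sketch of the data structure the abstract describes: a FIFO queue living in a flat memory array, with metadata fields (array start location, array size, queue top, queue bottom) kept alongside. The class name and wrap-around policy are illustrative; the patent's controller-side concurrency handling is not reproduced here.

    class ArrayQueue:
        """Bounded FIFO stored in a flat 'memory' list, with metadata that loosely
        mirrors the fields named in the abstract (start, size, top, bottom)."""

        def __init__(self, memory, start, size):
            self.memory = memory     # backing store standing in for system memory
            self.start = start       # array start location in memory
            self.size = size         # array size
            self.top = 0             # next slot to dequeue from
            self.bottom = 0          # next slot to enqueue into
            self.count = 0

        def enqueue(self, value):
            if self.count == self.size:
                raise OverflowError("queue full")
            self.memory[self.start + self.bottom] = value
            self.bottom = (self.bottom + 1) % self.size   # wrap around the array
            self.count += 1

        def dequeue(self):
            if self.count == 0:
                raise IndexError("queue empty")
            value = self.memory[self.start + self.top]
            self.top = (self.top + 1) % self.size
            self.count -= 1
            return value

    memory = [None] * 64
    queue = ArrayQueue(memory, start=16, size=8)
    for item in "abc":
        queue.enqueue(item)
    print(queue.dequeue(), queue.dequeue())   # a b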

  17. The Argonne silicon strip-detector array

    Energy Technology Data Exchange (ETDEWEB)

    Wuosmaa, A H; Back, B B; Betts, R R; Freer, M; Gehring, J; Glagola, B G; Happ, Th; Henderson, D J; Wilt, P [Argonne National Lab., IL (United States); Bearden, I G [Purdue Univ., Lafayette, IN (United States). Dept. of Physics

    1992-08-01

    Many nuclear physics experiments require the ability to analyze events in which large numbers of charged particles are detected and identified simultaneously, with good resolution and high efficiency, either alone, or in coincidence with gamma rays. The authors have constructed a compact large-area detector array to measure these processes efficiently and with excellent energy resolution. The array consists of four double-sided silicon strip detectors, each 5x5 cm{sup 2} in area, with front and back sides divided into 16 strips. To exploit the capability of the device fully, a system to read each strip-detector segment has been designed and constructed, based around a custom-built multi-channel preamplifier. The remainder of the system consists of high-density CAMAC modules, including multi-channel discriminators, charge-sensing analog-to-digital converters, and time-to-digital converters. The array's performance has been evaluated using alpha-particle sources, and in a number of experiments conducted at Argonne and elsewhere. Energy resolutions of {Delta}E {approx} 20-30 keV have been observed for 5 to 8 MeV alpha particles, as well as time resolutions {Delta}T {<=} 500 ps. 4 figs.

  18. A hollow stainless steel microneedle array to deliver insulin to a diabetic rat

    International Nuclear Information System (INIS)

    Vinayakumar, K B; Rajanna, K; Kulkarni, Prachit G; Ramachandra, S G; Nayak, M M; Hegde, Gopalkrishna M; Dinesh, N S

    2016-01-01

    A novel fabrication process is described for the development of a hollow stainless steel microneedle array using femtosecond laser micromachining. Using this method, a complicated microstructure can be fabricated in a single process step without using masks. The mechanical stability of the fabricated microneedle array was measured under axial and transverse loading. Skin histology was carried out to study microneedle penetration into rat skin. Fluid flow through the microneedle array was studied for different inlet pressures. The packaging of the microneedle array, to protect the microneedle bores from blockage by dust and other atmospheric contamination, was also considered. Finally, the microneedle array was tested and studied in vivo for insulin delivery to a diabetic rat. The results obtained were compared with standard subcutaneous delivery at the same dose rate and were found to be in good agreement. (paper)

  19. A hollow stainless steel microneedle array to deliver insulin to a diabetic rat

    Science.gov (United States)

    Vinayakumar, K. B.; Kulkarni, Prachit G.; Nayak, M. M.; Dinesh, N. S.; Hegde, Gopalkrishna M.; Ramachandra, S. G.; Rajanna, K.

    2016-06-01

    A novel fabrication process is described for the development of a hollow stainless steel microneedle array using femtosecond laser micromachining. Using this method, a complicated microstructure can be fabricated in a single process step without using masks. The mechanical stability of the fabricated microneedle array was measured for axial and transverse loading. Skin histology was carried out to study the microneedle penetration into the rat skin. Fluid flow through the microneedle array was studied for different inlet pressures. The packaging of the microneedle array, to protect the microneedle bore from blockage by dust and other atmospheric contamination, was also considered. Finally, the microneedle array was tested and studied in vivo for insulin delivery to a diabetic rat. The results obtained were compared with standard subcutaneous delivery at the same dose rate and were found to be in good agreement.

  20. Tests Of Array Of Flush Pressure Sensors

    Science.gov (United States)

    Larson, Larry J.; Moes, Timothy R.; Siemers, Paul M., III

    1992-01-01

    Report describes tests of array of pressure sensors connected to small orifices flush with surface of 1/7-scale model of F-14 airplane in wind tunnel. Part of effort to determine whether pressure parameters consisting of various sums, differences, and ratios of measured pressures can be used to compute accurately free-stream values of stagnation pressure, static pressure, angle of attack, angle of sideslip, and Mach number. Such arrays of sensors and associated processing circuitry are to be integrated into advanced aircraft as parts of flight-monitoring and -controlling systems.

  1. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  2. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
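
As a rough illustration of the execution model that APRON emulates, the sketch below steps a toy SIMD cellular processor array in which every cell applies the same operation to itself and its four neighbours in lockstep. It is not APRON's API; the grid size and the smoothing "program" are arbitrary assumptions.

```python
import numpy as np

def cpa_step(state: np.ndarray, kernel_op) -> np.ndarray:
    """One lockstep iteration of a toy cellular processor array:
    every cell applies the same operation to itself and its 4-neighbours."""
    up    = np.roll(state,  1, axis=0)
    down  = np.roll(state, -1, axis=0)
    left  = np.roll(state,  1, axis=1)
    right = np.roll(state, -1, axis=1)
    return kernel_op(state, up, down, left, right)

# example "program": a smoothing filter, as might run on a vision chip
smooth = lambda c, u, d, l, r: (c + u + d + l + r) / 5.0

grid = np.random.rand(64, 64).astype(np.float32)   # one register per processing element
for _ in range(10):                                 # ten SIMD instructions
    grid = cpa_step(grid, smooth)
```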

  3. High-Performance Flexible Force and Temperature Sensing Array with a Robust Structure

    Science.gov (United States)

    Kim, Min-Seok; Song, Han-Wook; Park, Yon-Kyu

    We have developed a flexible tactile sensor array capable of sensing physical quantities, e.g. force and temperature, with high performance and high spatial resolution. The fabricated tactile sensor consists of an 8 × 8 force-measuring array with 1 mm spacing and a thin metal (copper) temperature sensor. The flexible force-sensing array consists of a sub-millimetre-size bar-shaped semiconductor strain gage array attached to a thin and flexible printed circuit board covered by stretchable elastomeric material on both sides. This design incorporates the benefits of both materials, the semiconductor's high performance and the polymer's mechanical flexibility and robustness, while overcoming the drawbacks of each. A special fabrication process, the so-called “dry-transfer technique”, has been used to fabricate the tactile sensor along with standard micro-fabrication processes.

  4. Ordered arrays of polymeric nanopores by using inverse nanostructured PTFE surfaces

    International Nuclear Information System (INIS)

    Martín, Jaime; Martín-González, Marisol; Del Campo, Adolfo; Reinosa, Julián J; Fernández, José Francisco

    2012-01-01

    We present a simple, efficient, and high-throughput methodology for the fabrication of ordered nanoporous polymeric surfaces with areas in the range of cm². The procedure is based on a two-stage replication of a master nanostructured pattern. The process starts with the preparation of an ordered array of poly(tetrafluoroethylene) (PTFE) free-standing nanopillars by wetting self-ordered porous anodic aluminum oxide templates with molten PTFE. The nanopillars are 120 nm in diameter and approximately 350 nm long, while the array extends over cm². The PTFE nanostructuring process induces surface hydrocarbonation of the nanopillars, as revealed by confocal Raman microscopy/spectroscopy, which enhances the wettability of the originally hydrophobic material and facilitates its subsequent use as an inverse pattern. Thus, the PTFE nanostructure is then used as a negative master for the fabrication of macroscopic hexagonal arrays of nanopores composed of biocompatible poly(vinyl alcohol). In this particular case, the nanopores are 130–140 nm in diameter and the interpore distance is around 430 nm. Features of such characteristic dimensions are known to be easily recognized by living cells. Moreover, the inverse mold is not destroyed in the pore array demolding process and can be reused for further pore array fabrication. Therefore, the developed method allows the high-throughput production of cm²-scale biocompatible nanoporous surfaces that could be interesting as two-dimensional scaffolds for tissue repair or wound healing. Moreover, our approach can be extrapolated to the fabrication of almost any polymer and biopolymer ordered pore array. (paper)

  5. A Hardware Accelerator for Fault Simulation Utilizing a Reconfigurable Array Architecture

    Directory of Open Access Journals (Sweden)

    Sungho Kang

    1996-01-01

    Full Text Available In order to reduce cost and to achieve high speed, a new hardware accelerator for fault simulation has been designed. The architecture of the new accelerator is based on a reconfigurable mesh-type processing element (PE) array. Circuit elements at the same topological level are simulated concurrently, as in a pipelined process. A new parallel simulation algorithm expands all of the gates to two-input gates in order to limit the number of faults to two at each gate, so that the faults can be distributed uniformly throughout the PE array. The PE array reconfiguration operation provides a simulation speed advantage by maximizing the use of each PE cell.

  6. Capacitance of a highly ordered array of nanocapacitors: Model and microscopy

    Science.gov (United States)

    Cortés, A.; Celedón, C.; Ulloa, P.; Kepaptsoglou, D.; Häberle, P.

    2011-11-01

    This manuscript briefly describes the process used to build an ordered porous array in an anodic aluminum oxide (AAO) membrane, filled with multiwall carbon nanotubes (MWCNTs). The MWCNTs were grown directly inside the membrane through chemical vapor deposition (CVD). The role of the CNTs is to provide narrow metal electrodes in contact with a dielectric surface barrier, hence forming a capacitor. This procedure allows the construction of an array of 10¹⁰ parallel nano-spherical capacitors/cm². A central part of this contribution is the use of physical parameters obtained from processing transmission electron microscopy (TEM) images to predict the specific capacitance of the AAO arrays. Electrical parameters were obtained by solving Laplace's equation through finite element methods (FEMs).

  7. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A kind of point-array structure light is projected sequentially onto the reference plane and onto the object surface to be tested and thus forms a pair of point-array images. A mathematical model is established to formulize the imaging process with the pair of point arrays. This formulation allows for a description of the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point on the two point-array images. The encoded point array can be shifted digitally along both the lateral and the longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable for implementing 3D imaging of object surfaces with complex topology or large height discontinuities.
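
The abstract's key relation is that the height of each encoded point follows from the lateral displacement between its position in the reference image and in the object image. The toy sketch below assumes the simplest possible form of that relation, a single linear calibration constant k; the paper derives the exact model from the projector-camera geometry, so k and the point coordinates here are purely illustrative.

```python
import numpy as np

def range_from_displacement(ref_points, obj_points, k=0.5):
    """Toy reconstruction for point-array structured light: the depth at each
    encoded point is taken proportional to the lateral shift between its image
    on the reference plane and on the object surface. k is an assumed
    calibration constant (mm of height per pixel of shift)."""
    ref = np.asarray(ref_points, dtype=float)   # (N, 2) pixel coords on the reference plane
    obj = np.asarray(obj_points, dtype=float)   # (N, 2) pixel coords on the object surface
    displacement = np.linalg.norm(obj - ref, axis=1)
    return k * displacement                     # (N,) height values

# usage with three encoded points
heights = range_from_displacement([[10, 10], [20, 10], [30, 10]],
                                  [[12, 10], [23, 10], [30, 10]])
print(heights)   # the third point shows no shift, hence zero height
```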

  8. Large-area gold nanohole arrays fabricated by one-step method for surface plasmon resonance biochemical sensing.

    Science.gov (United States)

    Qi, Huijie; Niu, Lihong; Zhang, Jie; Chen, Jian; Wang, Shujie; Yang, Jingjing; Guo, Siyi; Lawson, Tom; Shi, Bingyang; Song, Chunpeng

    2018-04-01

    Surface plasmon resonance (SPR) nanosensors based on metallic nanohole arrays have been widely reported to detect binding interactions in biological specimens. A simple and effective method for constructing nanoscale arrays is essential for the development of SPR nanosensors. In this work, we report a one-step method to fabricate nanohole arrays by thermal nanoimprinting in the matrix of IPS (Intermediate Polymer Stamp). No additional etching process or supporting substrate is required. The preparation process is simple, time-saving and compatible with roll-to-roll processing, potentially allowing mass production. Moreover, the nanohole arrays were integrated into a detection platform as SPR sensors to investigate different types of biological binding interactions. The results demonstrate that our one-step method can be used to efficiently fabricate large-area and uniform nanohole arrays for biochemical sensing.

  9. A low-power small-area ADC array for IRFPA readout

    Science.gov (United States)

    Zhong, Shengyou; Yao, Libin

    2013-09-01

    The readout integrated circuit (ROIC) is a bridge between the infrared focal plane array (IRFPA) and the image processing circuit in an infrared imaging system. The ROIC is the first part of the signal processing circuit and is connected to the detectors directly, so its performance will greatly affect the detector or even the whole imaging system performance. With the development of CMOS technologies, it is possible to digitize the signal inside the ROIC and develop a digital ROIC. A digital ROIC can reduce the complexity of the whole system and improve system reliability. More importantly, it can accommodate a variety of digital signal processing techniques that a traditional analog ROIC cannot support. The analog-to-digital converter (ADC) is the most important building block in the digital ROIC. The requirements for ADCs inside the ROIC are low power, high dynamic range and small area. In this paper we propose an RC hybrid Successive Approximation Register (SAR) ADC as the column ADC for the digital ROIC. In our proposed ADC structure, a resistor ladder is used to generate several reference voltages. The proposed RC hybrid structure not only reduces the area of the capacitor array but also relaxes the matching requirement for the capacitor array. Theoretical analysis and simulation show that the RC hybrid SAR ADC is suitable for ADC array applications.
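
For readers unfamiliar with SAR conversion, the sketch below walks through the successive-approximation search itself: one bit is resolved per cycle by comparing the input against an internal DAC level. The RC-hybrid details (resistor ladder plus capacitor array) are abstracted into an ideal DAC here, so this is only a behavioural illustration, not the proposed circuit.

```python
def sar_conversion(v_in: float, v_ref: float, n_bits: int = 10) -> int:
    """Idealised successive-approximation register search: one bit per cycle,
    keeping a trial bit only if the DAC output stays at or below the input."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        v_dac = v_ref * trial / (1 << n_bits)   # ideal DAC (RC hybrid in the real design)
        if v_in >= v_dac:
            code = trial                        # keep the bit
    return code

# usage: digitise 0.8 V against a 1.2 V reference with a 10-bit column ADC
print(sar_conversion(0.8, 1.2))   # -> 682
```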

  10. Anodic Aluminum Oxide Templates for Nanowire Array Fabrication

    International Nuclear Information System (INIS)

    Nur Ubaidah Saidin; Kok, K.Y.; Ng, I.K.

    2011-01-01

    This paper reports on the process developed to fabricate anodic aluminium oxide (AAO) templates suitable for the fabrication of nanowire arrays. An anodization process has been used to fabricate the AAO templates with pore diameters ranging from 15 nm to 30 nm. Electrodeposition of parallel arrays of high-aspect-ratio nickel nanowires was demonstrated using these fabricated AAO templates. The nanowires produced were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM). It was found that the orientations of the electrodeposited nickel nanowires were governed by the deposition current and electrolyte conditions. (author)

  11. Silica needle template fabrication of metal hollow microneedle arrays

    International Nuclear Information System (INIS)

    Zhu, M W; Li, H W; Chen, X L; Tang, Y F; Lu, M H; Chen, Y F

    2009-01-01

    Drug delivery through hollow microneedle (HMN) arrays has now been recognized as one of the most promising techniques because it minimizes the shortcomings of the traditional drug delivery methods and has many exciting advantages—pain free and tunable release rates, for example. However, this drug delivery method has been greatly hindered from mass clinical application because of the high fabrication cost of HMN arrays. Hence, we developed a simple and cost-effective procedure using silica needles as templates to massively fabricate HMN arrays by using popular materials and industrially applicable processes of micro-imprint, hot embossing, electroplating and polishing. Metal HMN arrays of high quality are prepared with great flexibility, with tunable parameters of area, needle length, bore size and array dimension. This efficient and cost-effective fabrication method can also be applied to other applications after minor alterations, such as the preparation of optic, acoustic and solar harvesting materials and devices.

  12. Silica needle template fabrication of metal hollow microneedle arrays

    Science.gov (United States)

    Zhu, M. W.; Li, H. W.; Chen, X. L.; Tang, Y. F.; Lu, M. H.; Chen, Y. F.

    2009-11-01

    Drug delivery through hollow microneedle (HMN) arrays has now been recognized as one of the most promising techniques because it minimizes the shortcomings of the traditional drug delivery methods and has many exciting advantages—pain free and tunable release rates, for example. However, this drug delivery method has been greatly hindered from mass clinical application because of the high fabrication cost of HMN arrays. Hence, we developed a simple and cost-effective procedure using silica needles as templates to massively fabricate HMN arrays by using popular materials and industrially applicable processes of micro-imprint, hot embossing, electroplating and polishing. Metal HMN arrays of high quality are prepared with great flexibility, with tunable parameters of area, needle length, bore size and array dimension. This efficient and cost-effective fabrication method can also be applied to other applications after minor alterations, such as the preparation of optic, acoustic and solar harvesting materials and devices.

  13. Miniaturized Ultrasound Imaging Probes Enabled by CMUT Arrays with Integrated Frontend Electronic Circuits

    Science.gov (United States)

    Khuri-Yakub, B. (Pierre) T.; Oralkan, Ömer; Nikoozadeh, Amin; Wygant, Ira O.; Zhuang, Steve; Gencel, Mustafa; Choe, Jung Woo; Stephens, Douglas N.; de la Rama, Alan; Chen, Peter; Lin, Feng; Dentinger, Aaron; Wildes, Douglas; Thomenius, Kai; Shivkumar, Kalyanam; Mahajan, Aman; Seo, Chi Hyung; O’Donnell, Matthew; Truong, Uyen; Sahn, David J.

    2010-01-01

    Capacitive micromachined ultrasonic transducer (CMUT) arrays are conveniently integrated with frontend integrated circuits either monolithically or in a hybrid multichip form. This integration helps with reducing the number of active data processing channels for 2D arrays. This approach also preserves the signal integrity for arrays with small elements. Therefore CMUT arrays integrated with electronic circuits are most suitable to implement miniaturized probes required for many intravascular, intracardiac, and endoscopic applications. This paper presents examples of miniaturized CMUT probes utilizing 1D, 2D, and ring arrays with integrated electronics. PMID:21097106

  14. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    Science.gov (United States)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.

  15. The performance of disk arrays in shared-memory database machines

    Science.gov (United States)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
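
The abstract introduces data temperature as a measure of how well a storage configuration can sustain its workload. A common way to express such a metric is as the I/O rate per unit of stored data; the sketch below assumes that formulation (the paper's exact definition may differ) and compares two purely hypothetical configurations.

```python
def data_temperature(io_per_second: float, capacity_gb: float) -> float:
    """Data temperature of a storage configuration, taken here as the I/O rate
    the configuration must sustain per gigabyte of data stored (an assumed
    formulation used only for illustration)."""
    return io_per_second / capacity_gb

# compare an array of small drives against a mirrored pair of large drives (made-up numbers)
array_temp    = data_temperature(io_per_second=1200, capacity_gb=40)   # 30 I/O per second per GB
mirrored_temp = data_temperature(io_per_second=600,  capacity_gb=20)   # 30 I/O per second per GB
print(array_temp, mirrored_temp)   # both configurations sustain the same data temperature
```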

  16. Drying induced upright sliding and reorganization of carbon nanotube arrays

    International Nuclear Information System (INIS)

    Li Qingwen; De Paula, Raymond; Zhang Xiefei; Zheng Lianxi; Arendt, Paul N; Mueller, Fred M; Zhu, Y T; Tu Yi

    2006-01-01

    Driven by capillary force, wet carbon nanotube (CNT) arrays have been found to reorganize into cellular structures upon drying. During the reorganization process, individual CNTs are firmly attached to the substrate and have to lie down on the substrate at cell bottoms, forming closed cells. Here we demonstrate that by modifying catalyst structures, the adhesion of CNTs to the substrate can be weakened. Upon drying such CNT arrays, CNTs may slide away from their original sites on the surface and self-assemble into cellular patterns with open bottoms. It is also found that the sliding distance of CNTs increases with array height, and drying millimetre-tall arrays leads to the sliding of CNTs over a few hundred micrometres and their eventual self-assembly into discrete islands. By introducing regular vacancies in CNT arrays, CNTs may be manipulated into different patterns.

  17. Sorting white blood cells in microfabricated arrays

    Science.gov (United States)

    Castelino, Judith Andrea Rose

    Fractionating white cells in microfabricated arrays presents the potential for detecting cells with abnormal adhesive or deformation properties. A possible application is separating nucleated fetal red blood cells from maternal blood. Since fetal cells are nucleated, it is possible to extract genetic information about the fetus from them. Separating fetal cells from maternal blood would provide a low cost noninvasive prenatal diagnosis for genetic defects, which is not currently available. We present results showing that fetal cells penetrate further into our microfabricated arrays than adult cells, and that it is possible to enrich the fetal cell fraction using the arrays. We discuss modifications to the array which would result in further enrichment. Fetal cells are less adhesive and more deformable than adult white cells. To determine which properties limit penetration, we compared the penetration of granulocytes and lymphocytes in arrays with different etch depths, constriction size, constriction frequency, and with different amounts of metabolic activity. The penetration of lymphocytes and granulocytes into constrained and unconstrained arrays differed qualitatively. In constrained arrays, the cells were activated by repeated shearing, and the number of cells stuck as a function of distance fell superexponentially. In unconstrained arrays the number of cells stuck fell slower than an exponential. We attribute this result to different subpopulations of cells with different sticking parameters. We determined that penetration in unconstrained arrays was limited by metabolic processes, and that when metabolic activity was reduced penetration was limited by deformability. Fetal cells also contain a different form of hemoglobin with a higher oxygen affinity than adult hemoglobin. Deoxygenated cells are paramagnetic and are attracted to high magnetic field gradients. We describe a device which can separate cells using 10 μm magnetic wires to deflect the paramagnetic

  18. Spatio-temporal change detection from multidimensional arrays: Detecting deforestation from MODIS time series

    Science.gov (United States)

    Lu, Meng; Pebesma, Edzer; Sanchez, Alber; Verbesselt, Jan

    2016-07-01

    Growing availability of long-term satellite imagery enables change modeling with advanced spatio-temporal statistical methods. Multidimensional arrays naturally match the structure of spatio-temporal satellite data and can provide a clean modeling process for complex spatio-temporal analysis over large datasets. Our case study illustrates the detection of breakpoints in MODIS imagery time series for land cover change in the Brazilian Amazon using the BFAST (Breaks For Additive Season and Trend) change detection framework. BFAST includes an Empirical Fluctuation Process (EFP) to signal a change and a process to locate the change point in time. We extend the EFP to account for the spatial autocorrelation between spatial neighbors and assess the effects of spatial correlation when applying BFAST on satellite image time series. In addition, we evaluate how sensitive the EFP is to the assumption that its time series residuals are temporally uncorrelated, by modeling it as an autoregressive process. We use arrays as a unified data structure for the modeling process, R to execute the analysis, and an array database management system to scale computation. Our results point to BFAST as a robust approach against mild temporal and spatial correlation, to the use of arrays to ease the modeling process of spatio-temporal change, and towards communicable and scalable analysis.
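
To make the Empirical Fluctuation Process idea concrete, the sketch below applies a toy CUSUM-style monitoring scheme to a single pixel's time series: fit a stable model on a history period, then flag a break when the cumulative sum of standardised residuals crosses a fixed boundary. BFAST's EFP uses a season-plus-trend model and properly derived boundaries, and the authors work in R and an array database; this Python fragment only illustrates the skeleton of the test, with synthetic data.

```python
import numpy as np

def monitor_break(history: np.ndarray, monitor: np.ndarray, crit: float = 3.0):
    """Toy empirical-fluctuation monitoring for one pixel's time series:
    fit a constant-mean model on the stable history period, then track the
    cumulative sum of standardised residuals over the monitoring period and
    return the index of the first boundary crossing (None if no break)."""
    mu, sigma = history.mean(), history.std(ddof=1)
    cusum = np.cumsum((monitor - mu) / (sigma * np.sqrt(len(history))))
    crossed = np.nonzero(np.abs(cusum) > crit)[0]
    return int(crossed[0]) if crossed.size else None

rng = np.random.default_rng(0)
history = rng.normal(0.8, 0.05, 100)            # stable vegetation-index-like signal
monitor = np.r_[rng.normal(0.8, 0.05, 20),      # still stable...
                rng.normal(0.4, 0.05, 20)]      # ...then an abrupt drop (e.g. deforestation)
print(monitor_break(history, monitor))          # index of the first alarm in the monitoring period
```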

  19. Optimizing Vector-Quantization Processor Architecture for Intelligent Query-Search Applications

    Science.gov (United States)

    Xu, Huaiyu; Mita, Yoshio; Shibata, Tadashi

    2002-04-01

    The architecture of a very large scale integration (VLSI) vector-quantization processor (VQP) has been optimized to develop a general-purpose intelligent query-search agent. The agent performs a similarity-based search in a large-volume database. Although similarity-based search processing is computationally very expensive, latency-free searches have become possible due to the highly parallel maximum-likelihood search architecture of the VQP chip. Three architectures of the VQP chip have been studied and their performances are compared. In order to give reasonable searching results according to the different policies, the concept of penalty function has been introduced into the VQP. An E-commerce real-estate agency system has been developed using the VQP chip implemented in a field-programmable gate array (FPGA) and the effectiveness of such an agency system has been demonstrated.
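
The VQP's core operation is a maximum-likelihood (nearest-template) search over a codebook, optionally biased by a penalty function that encodes a search policy. The sequential Python sketch below reproduces that computation in software; the Manhattan distance, the real-estate-style feature vectors and the penalty values are illustrative assumptions, since the chip performs the matching in fully parallel hardware.

```python
import numpy as np

def vq_query(query, codebook, penalties=None):
    """Sequential stand-in for the VQP's parallel matching: return the index of
    the template minimising (distance + penalty). 'penalties' is an assumed
    per-template cost implementing a search policy; the chip evaluates all
    template distances in parallel."""
    codebook = np.asarray(codebook, dtype=float)
    query = np.asarray(query, dtype=float)
    dist = np.abs(codebook - query).sum(axis=1)        # Manhattan distance to each template
    if penalties is not None:
        dist = dist + np.asarray(penalties, dtype=float)
    return int(np.argmin(dist))

# three property "feature vectors": [price, area, distance-to-station] (made-up values)
codebook = [[300, 70, 10], [320, 85, 5], [250, 60, 15]]
query    = [310, 80, 7]
print(vq_query(query, codebook))                        # plain nearest match -> index 1
print(vq_query(query, codebook, penalties=[0, 50, 0]))  # policy penalising template 1 -> index 0
```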

  20. A CMOS ASIC Design for SiPM Arrays.

    Science.gov (United States)

    Dey, Samrat; Banks, Lushon; Chen, Shaw-Pin; Xu, Wenbin; Lewellen, Thomas K; Miyaoka, Robert S; Rudell, Jacques C

    2011-12-01

    Our lab has previously reported on novel board-level readout electronics for an 8×8 silicon photomultiplier (SiPM) array featuring a row/column summation technique to reduce the hardware requirements for signal processing. We are taking the next step by implementing a monolithic CMOS chip based on the row-column architecture. In addition, this paper explores the option of using diagonal summation as well as calibration to compensate for temperature and process variations. A timing pickoff signal that aligns all of the positioning (spatial-channel) pulses in the array is also described. The ASIC design is targeted to be scalable with the detector size and flexible enough to accommodate detectors from different vendors. This paper focuses on circuit implementation issues associated with the design of the ASIC to interface our Phase II MiCES FPGA board with a SiPM array. Moreover, a discussion is provided of strategies to eventually integrate all the analog and mixed-signal electronics with the SiPM, on either a single silicon substrate or a multi-chip module (MCM).
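
The row/column summation idea can be illustrated numerically: the 64 pixel signals of an 8×8 SiPM array are collapsed into 8 row sums and 8 column sums, and the event position is then estimated from the two centroids. The sketch below is only a software analogue of that readout scheme (diagonal sums would be added the same way); it says nothing about the analog circuit implementation.

```python
import numpy as np

def row_column_readout(pixel_signals: np.ndarray):
    """Collapse an 8x8 SiPM array into 8 row sums and 8 column sums (16 channels
    instead of 64) and estimate the flash position from the two centroids."""
    rows = pixel_signals.sum(axis=1)          # 8 row channels
    cols = pixel_signals.sum(axis=0)          # 8 column channels
    idx = np.arange(pixel_signals.shape[0])
    y = (rows * idx).sum() / rows.sum()       # row centroid
    x = (cols * idx).sum() / cols.sum()       # column centroid
    return rows, cols, (x, y)

# a light flash centred near pixel (row 3, column 5)
signals = np.zeros((8, 8))
signals[2:5, 4:7] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
_, _, position = row_column_readout(signals)
print(position)   # approximately (5.0, 3.0)
```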

  1. Hollow Nanospheres Array Fabrication via Nano-Conglutination Technology.

    Science.gov (United States)

    Zhang, Man; Deng, Qiling; Xia, Liangping; Shi, Lifang; Cao, Axiu; Pang, Hui; Hu, Song

    2015-09-01

    A hollow nanosphere array is a special nanostructure with great applications in photonics, electronics and biochemistry. High-resolution nanofabrication techniques are crucial to nanoscience and nanotechnology. This paper presents a novel nonconventional nano-conglutination technology combining polystyrene sphere (PS) self-assembly, conglutination and a lift-off process to fabricate hollow nanosphere arrays with nanoholes. A self-assembled monolayer of PSs was peeled off the quartz wafer by the thiol-ene adhesive material, and then the PSs were removed via a lift-off process, yielding hollow nanospheres embedded in the thiol-ene substrate. Thiol-ene polymer is a UV-curable material formed via a "click chemistry" reaction at ambient conditions without oxygen inhibition, and its excellent chemical and physical properties make it attractive as the adhesive material in nano-conglutination technology. Using this technique, a hollow nanosphere array with nanoholes 200 nm in diameter embedded in the rigid thiol-ene substrate was fabricated, which has great potential to serve as a reaction container, catalyst and surface-enhanced Raman scattering substrate.

  2. Investigation on the Photoelectrocatalytic Activity of Well-Aligned TiO2 Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Xiaomeng Wu

    2012-01-01

    Full Text Available Well-aligned TiO2 nanotube arrays were fabricated by anodizing Ti foil in viscous F−-containing organic electrolytes, and the crystal structure and morphology of the TiO2 nanotube arrays were characterized and analyzed by XRD, SEM, and TEM, respectively. The photocatalytic activity of the TiO2 nanotube arrays was evaluated in the photocatalytic (PC) and photoelectrocatalytic (PEC) degradation of methylene blue (MB) dye in different supporting solutions. An excellent performance of ca. 97% color removal was reached after 90 min in the PEC process compared to that of the PC process, which indicates that a certain external potential bias favors the promotion of the electrode reaction rate on the TiO2 nanotube array when it is under illumination. In addition, it is found that a PEC process conducted in supporting solutions with low pH and containing Cl− is also beneficial for accelerating the degradation rate of MB.

  3. Ordered arrays of embedded Ga nanoparticles on patterned silicon substrates

    International Nuclear Information System (INIS)

    Bollani, M; Bietti, S; Sanguinetti, S; Frigeri, C; Chrastina, D; Reyes, K; Smereka, P; Millunchick, J M; Vanacore, G M; Tagliaferri, A; Burghammer, M

    2014-01-01

    We fabricate site-controlled, ordered arrays of embedded Ga nanoparticles on Si, using a combination of substrate patterning and molecular-beam epitaxial growth. The fabrication process consists of two steps. Ga droplets are initially nucleated in an ordered array of inverted pyramidal pits, and then partially crystallized by exposure to an As flux, which promotes the formation of a GaAs shell that seals the Ga nanoparticle within two semiconductor layers. The nanoparticle formation process has been investigated through a combination of extensive chemical and structural characterization and theoretical kinetic Monte Carlo simulations. (papers)

  4. The Next-Generation Very Large Array: Technical Overview

    Science.gov (United States)

    McKinnon, Mark; Selina, Rob

    2018-01-01

    As part of its mandate as a national observatory, the NRAO is looking toward the long range future of radio astronomy and fostering the long term growth of the US astronomical community. NRAO has sponsored a series of science and technical community meetings to consider the science mission and design of a next-generation Very Large Array (ngVLA), building on the legacies of the Atacama Large Millimeter/submillimeter Array (ALMA) and the Very Large Array (VLA). The basic ngVLA design emerging from these discussions is an interferometric array with approximately ten times the sensitivity and ten times higher spatial resolution than the VLA and ALMA radio telescopes, optimized for operation in the wavelength range 0.3 cm to 3 cm. The ngVLA would open a new window on the Universe through ultra-sensitive imaging of thermal line and continuum emission down to milli-arcsecond resolution, as well as unprecedented broadband continuum polarimetric imaging of non-thermal processes. The specifications and concepts for major ngVLA system elements are rapidly converging. We will provide an overview of the current system design of the ngVLA. The concepts for major system elements such as the antenna, receiving electronics, and central signal processing will be presented. We will also describe the major development activities that are presently underway to advance the design.

  5. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  6. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
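
The work-farm pattern described above, a parallel set of identical worker objects fed by one input stream and producing one output stream, maps naturally onto a process pool. The Python sketch below is only a host-side analogue of that pattern; on the MPPA each worker would be a small program resident on its own RISC core and the streams would be self-synchronizing channels.

```python
from multiprocessing import Pool

def worker(block):
    """One worker object: consume a block from the input stream, produce a result.
    The body is a stand-in for real per-block work (e.g. transforming a macroblock)."""
    return sum(x * x for x in block)

if __name__ == "__main__":
    # input stream: 16 blocks of 64 samples each
    input_stream = [list(range(i, i + 64)) for i in range(0, 1024, 64)]
    with Pool(processes=4) as farm:                      # a four-worker farm
        output_stream = farm.map(worker, input_stream)   # output order matches input order
    print(output_stream[:3])
```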

  7. MISSION-ORIENTED SENSOR ARRAYS AND UAVs – A CASE STUDY ON ENVIRONMENTAL MONITORING

    Directory of Open Access Journals (Sweden)

    N. M. Figueira

    2015-08-01

    Full Text Available This paper presents a new concept of UAV mission design in geomatics, applied to the generation of thematic maps for a multitude of civilian and military applications. We discuss the architecture of Mission-Oriented Sensor Arrays (MOSA), proposed in Figueira et al. (2013), aimed at splitting and decoupling the mission-oriented part of the system (non-safety-critical hardware and software) from the aircraft control systems (safety-critical). As a case study, we present an environmental monitoring application for the automatic generation of thematic maps to track gunshot activity in conservation areas. The MOSA modeled for this application integrates information from a thermal camera and an on-the-ground microphone array. The use of microphone array technology is of particular interest in this paper. These arrays allow estimation of the direction-of-arrival (DOA) of the incoming sound waves. Information about events of interest is obtained by fusing the data provided by the on-the-ground microphone array with information from the thermal image processing carried out on board the UAV. Preliminary results show the feasibility of the on-the-ground sound processing array and the simulation of the main processing module, to be embedded into a UAV in future work. The main contributions of this paper are the proposed MOSA system, including concepts, models and architecture.
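
The microphone array's role is direction-of-arrival estimation. As a minimal illustration of the underlying principle, the sketch below estimates the DOA of a far-field source from the time difference of arrival between just two microphones, found as the peak of their cross-correlation; a real MOSA array uses more elements and more robust array processing, and the sampling rate, spacing and signals here are synthetic assumptions.

```python
import numpy as np

def doa_two_mics(sig_a, sig_b, fs, spacing, c=343.0):
    """Estimate the direction of arrival (degrees from broadside) of a far-field
    source from the time difference of arrival between two microphones,
    taken as the lag of the cross-correlation peak."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)     # positive lag: the wave reached mic B first
    tdoa = lag / fs
    sin_theta = np.clip(c * tdoa / spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# synthetic test: a 1 kHz tone arriving 3 samples earlier at mic B than at mic A
fs, f0, n = 48000, 1000.0, 4800
t = np.arange(n) / fs
delay = 3 / fs
sig_b = np.sin(2 * np.pi * f0 * t)
sig_a = np.sin(2 * np.pi * f0 * (t - delay))     # mic A hears the wavefront 'delay' later
print(round(doa_two_mics(sig_a, sig_b, fs, spacing=0.05), 1))   # about 25 degrees off broadside
```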

  8. Array display tool ADT reference manual. Version 1.2

    International Nuclear Information System (INIS)

    Evans, K. Jr.

    1995-12-01

    Array Display Tool (ADT) is a Motif program to display arrays of process variables from the Advanced Photon Source control system. A typical use is to display the horizontal and vertical monitor readings. The screen layout, apart from the menu bar, consists of two types of graphic areas in which the values for the arrays of process variables are shown: display areas, which display one or more arrays as a function of index, and a zoom area. In the zoom area, only specified arrays are displayed, as a function of lattice position, along with symbols for the major elements of the lattice. There can be several display areas, but at most one zoom area. When the screen is resized, these areas change size proportionally. There are a number of options in the View Menu to change the way the values are displayed. It is also possible via the Options Menu to: (1) Store the current values internally. (2) Store the values from a snapshot file internally. (3) Display one of the stored sets of values along with the current values. (4) Display the difference of the current values with one of the stored sets of values. (5) Write the current values to a snapshot file. There are several (currently 5) slots in which you can store values internally. In addition, you can display the values with specified reference values subtracted.

  9. Electrodeposited highly-ordered manganese oxide nanowire arrays for supercapacitors

    Science.gov (United States)

    Liu, Haifeng; Lu, Bingqiang; Wei, Shuiqiang; Bao, Mi; Wen, Yanxuan; Wang, Fan

    2012-07-01

    Large arrays of well-aligned Mn oxide nanowires were prepared by electrodeposition using anodic aluminum oxide templates. The sizes of the nanowires were tuned by varying the electrolyte solution involved, and MnO2 nanowires 10 μm in length were obtained in a neutral KMnO4 bath for 1 h. MnO2 nanowire arrays grown on a conducting substrate avoid the tedious electrode-making process, and electrochemical characterization demonstrates that the MnO2 nanowire array electrode has good capacitive behavior. Due to the limited mass transport in the narrow spacing, the spacing between neighboring nanowires strongly influences the electrochemical performance.

  10. In-beam conversion electron spectroscopy using the SACRED array

    International Nuclear Information System (INIS)

    Jones, P.M.; Cann, K.J.; Cocks, J.F.C.; Jones, G.D.; Julin, R.; Schulze, B.; Smith, J.F.; Wilson, A.N.

    1997-01-01

    Conversion electron studies of medium-heavy to heavy nuclear mass systems are important where the internal conversion process begins to dominate over gamma-ray emission. A segmented detector array sensitive to conversion electrons has been used to study multiple conversion electron cascades from nuclear transitions. The application of the silicon array for conversion electron detection (SACRED) to in-beam measurements has been successfully implemented. (orig.). With 2 figs

  11. Experimental studies of Z-pinches of mixed wire array with aluminum and tungsten

    International Nuclear Information System (INIS)

    Ning Cheng; Li Zhenghong; Hua Xinsheng; Xu Rongkun; Peng Xianjue; Xu Zeping; Yang Jianlun; Guo Cun; Jiang Shilun; Feng Shuping; Yang Libing; Yan Chengli; Song Fengjun; Smirnov, V.P.; Kalinin, Yu.G.; Kingsep, A.S.; Chernenko, A.S.; Grabovsky, E.V.

    2004-01-01

    As a joint experiment between China and Russia, experimental studies of Z-pinches of mixed wire arrays of aluminum (Al) and tungsten (W) were carried out on the S-300 generator at the Kurchatov Institute of Russia. The experimental results were compared with those of a single Al array and a single W array, respectively. There are obvious differences between the mixed array and the single arrays in their photon spectral distributions. The intensity of K-series emission lines from the mixed wire array Z-pinch is lower than that from the single Al array. Radiated lines with wavelengths less than 1.6 nm were not found in single W array Z-pinches. In the Z-pinch processes, the area radiating x-rays in the mixed wire array is smaller than that of the single Al array, but is slightly lower than that from the single W array. The FWHM of the x-ray pulse, with a maximal power of 0.3-0.5 TW and total energy of 10-20 kJ, is about 25 ns, radiated from Z-pinches with a radial convergence of 4-5 on the S-300 generator. The shadow photograph of the mixed wire-array Z-pinch plasma by laser probe shows that the core-corona configuration was formed and the corona was moving toward the central axis during the wire-array plasma formation, that the interface of the plasma is not clear, and that there are a number of structures inside. It also suggests that there was an obvious development of the magneto-Rayleigh-Taylor instability in the Z-pinch process as well.

  12. Gate protective device for SOS array

    Science.gov (United States)

    Meyer, J. E., Jr.; Scott, J. H.

    1972-01-01

    Protective gate device consisting of alternating heavily doped n(+) and p(+) diffusions eliminates breakdown of the silicon oxide in silicon-on-sapphire (SOS) arrays caused by electrostatic discharge from personnel or equipment. Diffusions are easily produced during normal double epitaxial processing. Devices with nine layers had 27-volt breakdown.

  13. A silicon pixel detector with routing for external VLSI read-out

    International Nuclear Information System (INIS)

    Thomas, S.L.; Seller, P.

    1988-07-01

    A silicon pixel detector with an array of 32 by 16 hexagonal pixels has been designed and is being built on high resistivity silicon. The detector elements are reverse biased diodes consisting of p-implants in an n-type substrate and are fully depleted from the front to the back of the wafer. They are intended to measure high energy ionising particles traversing the detector. The detailed design of the pixels, their layout and method of read-out are discussed. A number of test structures have been incorporated onto the wafer to enable measurements to be made on individual pixels together with a variety of active devices. The results will give a better understanding of the operation of the pixel array, and will allow testing of computer simulations of more elaborate structures for the future. (author)

  14. Cyclotron-Resonance-Maser Arrays

    International Nuclear Information System (INIS)

    Kesar, A.; Lei, L.; Dikhtyar, V.; Korol, M.; Jerby, E.

    1999-01-01

    The cyclotron-resonance-maser (CRM) array [1] is a radiation source which consists of CRM elements coupled together under a common magnetic field. Each CRM element employs a low-energy electron beam which performs a cyclotron interaction with the local electromagnetic wave. These waves can be coupled together among the CRM elements, hence the interaction is coherently synchronized in the entire array. The implementation of the CRM-array approach may alleviate several technological difficulties which impede the development of single-beam gyro-devices. Furthermore, it offers new features, such as the phased-array antenna incorporated in the CRM array itself. The CRM-array studies may lead to the development of compact, high-power radiation sources operating at low voltages. This paper introduces new conceptual schemes of CRM arrays, and presents the progress in related theoretical and experimental studies in our laboratory. These include a multi-mode analysis of a CRM array, and a first operation of this device with five carbon-fiber cathodes.

  15. Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.

    Science.gov (United States)

    Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng

    2018-05-14

    In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. For the sake of acquiring as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, but fewer than the upper bound indicated by the minimum redundant array (MRA). We further apply this cascade array to multiple-input multiple-output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
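
The DOF argument can be made concrete by enumerating the difference co-array of a sensor geometry and counting its distinct lags. The sketch below does this for a 6-element ULA and for an illustrative "ULA plus non-uniform tail" layout in the spirit of the cascade array; the actual sensor positions of the proposed design are not reproduced here, so the second geometry is an assumption for demonstration only.

```python
import numpy as np

def difference_coarray(positions):
    """Return the sorted unique lags of the difference co-array of a linear
    array with the given sensor positions (in units of half-wavelength)."""
    p = np.asarray(positions)
    diffs = (p[:, None] - p[None, :]).ravel()
    return np.unique(diffs)

# illustrative 6-sensor geometries (positions are assumptions, not the paper's design)
ula     = [0, 1, 2, 3, 4, 5]      # uniform linear array
cascade = [0, 1, 2, 3, 7, 11]     # ULA section followed by a non-uniform section

for name, pos in [("ULA", ula), ("cascade-like", cascade)]:
    lags = difference_coarray(pos)
    print(f"{name}: {len(lags)} distinct lags (DOFs), from {lags.min()} to {lags.max()}")
# the non-uniform geometry yields many more distinct lags from the same six sensors
```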

  16. Sensor Arrays and Electronic Tongue Systems

    Directory of Open Access Journals (Sweden)

    Manel del Valle

    2012-01-01

    Full Text Available This paper describes recent work performed with electronic tongue systems utilizing electrochemical sensors. The electronic tongue concept is a new trend in sensors that uses arrays of sensors together with chemometric tools to unravel the complex information generated. The initial contributions, and also the most used variant, employ conventional ion-selective electrodes, in which case it is named a potentiometric electronic tongue. The second important variant is the one that employs voltammetry for its operation. As the chemometric processing tool, the use of artificial neural networks as the preferred data processing variant will be described. The use of sensor arrays inserted in flow injection or sequential injection systems will exemplify attempts made to automate the operation of electronic tongues. Significant use of biosensors, mainly enzyme-based, to form what is already named the bioelectronic tongue will also be presented. Application examples will be illustrated with selected study cases from the Sensors and Biosensors Group at the Autonomous University of Barcelona.

  17. Wire Array Solar Cells: Fabrication and Photoelectrochemical Studies

    Science.gov (United States)

    Spurgeon, Joshua Michael

    Despite demand for clean energy to reduce our addiction to fossil fuels, the price of these technologies relative to oil and coal has prevented their widespread implementation. Solar energy has enormous potential as a carbon-free resource but is several times the cost of coal-produced electricity, largely because photovoltaics of practical efficiency require high-quality, pure semiconductor materials. To produce current in a planar junction solar cell, an electron or hole generated deep within the material must travel all the way to the junction without recombining. Radial junction, wire array solar cells, however, have the potential to decouple the directions of light absorption and charge-carrier collection so that a semiconductor with a minority-carrier diffusion length shorter than its absorption depth (i.e., a lower quality, potentially cheaper material) can effectively produce current. The axial dimension of the wires is long enough for sufficient optical absorption while the charge-carriers are collected along the shorter radial dimension in a massively parallel array. This thesis explores the wire array solar cell design by developing potentially low-cost fabrication methods and investigating the energy-conversion properties of the arrays in photoelectrochemical cells. The concept was initially investigated with Cd(Se, Te) rod arrays; however, Si was the primary focus of wire array research because its semiconductor properties make low-quality Si an ideal candidate for improvement in a radial geometry. Fabrication routes for Si wire arrays were explored, including the vapor-liquid-solid growth of wires using SiCl4. Uniform, vertically aligned Si wires were demonstrated in a process that permits control of the wire radius, length, and spacing. A technique was developed to transfer these wire arrays into a low-cost, flexible polymer film, and grow multiple subsequent arrays using a single Si(111) substrate. Photoelectrochemical measurements on Si wire array

  18. Block QCA Fault-Tolerant Logic Gates

    Science.gov (United States)

    Firjany, Amir; Toomarian, Nikzad; Modarres, Katayoon

    2003-01-01

    Suitably patterned arrays (blocks) of quantum-dot cellular automata (QCA) have been proposed as fault-tolerant universal logic gates. These block QCA gates could be used to realize the potential of QCA for further miniaturization, reduction of power consumption, increase in switching speed, and increased degree of integration of very-large-scale integrated (VLSI) electronic circuits. The limitations of conventional VLSI circuitry, the basic principle of operation of QCA, and the potential advantages of QCA-based VLSI circuitry were described in several NASA Tech Briefs articles, namely Implementing Permutation Matrices by Use of Quantum Dots (NPO-20801), Vol. 25, No. 10 (October 2001), page 42; Compact Interconnection Networks Based on Quantum Dots (NPO-20855) Vol. 27, No. 1 (January 2003), page 32; Bit-Serial Adder Based on Quantum Dots (NPO-20869), Vol. 27, No. 1 (January 2003), page 35; and Hybrid VLSI/QCA Architecture for Computing FFTs (NPO-20923), which follows this article. To recapitulate the principle of operation (greatly oversimplified because of the limitation on space available for this article): A quantum-dot cellular automaton contains four quantum dots positioned at or between the corners of a square cell. The cell contains two extra mobile electrons that can tunnel (in the quantum-mechanical sense) between neighboring dots within the cell. The Coulomb repulsion between the two electrons tends to make them occupy antipodal dots in the cell. For an isolated cell, there are two energetically equivalent arrangements (denoted polarization states) of the extra electrons. The cell polarization is used to encode binary information. Because the polarization of a nonisolated cell depends on Coulomb-repulsion interactions with neighboring cells, universal logic gates and binary wires could be constructed, in principle, by arraying QCA of suitable design in suitable patterns. Heretofore, researchers have recognized two major obstacles to realization of QCA

  19. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-01-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are −0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions. (paper)

  20. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    Science.gov (United States)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-07-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are -0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions.
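
The calibration chain described above (per-sensor error calibration, coordinate transformation, misalignment calibration) ends in a per-reading correction. The sketch below applies the usual fluxgate correction model: bias subtraction, a scale/non-orthogonality matrix, and a rotation into the common array frame to remove inter-sensor misalignment. The numerical parameters are placeholders for illustration, not the values estimated in the paper.

```python
import numpy as np

def calibrate_reading(raw, bias, scale_matrix, align_rotation):
    """Apply a standard fluxgate correction chain to one three-axis reading:
    subtract bias, correct scale/non-orthogonality, then rotate the result
    into the common array frame to remove inter-sensor misalignment."""
    corrected = scale_matrix @ (np.asarray(raw, float) - np.asarray(bias, float))
    return align_rotation @ corrected

# placeholder parameters for one sensor of the array (values are illustrative)
bias  = np.array([120.0, -80.0, 45.0])        # nT offsets
scale = np.diag([1.002, 0.998, 1.001])        # scale-factor corrections
angle = np.radians(0.3)                       # small misalignment about the z axis
rot   = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0,            0.0,           1.0]])

raw = np.array([21350.0, -4120.0, 43210.0])   # one raw reading in nT
print(calibrate_reading(raw, bias, scale, rot))
```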

  1. DETECTION OF FAST TRANSIENTS WITH RADIO INTERFEROMETRIC ARRAYS

    International Nuclear Information System (INIS)

    Bhat, N. D. R.; Chengalur, J. N.; Gupta, Y.; Prasad, J.; Roy, J.; Kudale, S. S.; Cox, P. J.; Bailes, M.; Burke-Spolaor, S.; Van Straten, W.

    2013-01-01

    Next-generation radio arrays, including the Square Kilometre Array (SKA) and its pathfinders, will open up new avenues for exciting transient science at radio wavelengths. Their innovative designs, comprising a large number of small elements, pose several challenges in digital processing and optimal observing strategies. The Giant Metre-wave Radio Telescope (GMRT) presents an excellent test-bed for developing and validating suitable observing modes and strategies for transient experiments with future arrays. Here we describe the first phase of the ongoing development of a transient detection system for GMRT that is planned to eventually function in a commensal mode with other observing programs. It capitalizes on the GMRT's interferometric and sub-array capabilities, and the versatility of a new software backend. We outline considerations in the plan and design of transient exploration programs with interferometric arrays, and describe a pilot survey that was undertaken to aid in the development of algorithms and associated analysis software. This survey was conducted at 325 and 610 MHz, and covered 360 deg² of the sky with short dwell times. It provides large volumes of real data that can be used to test the efficacies of various algorithms and observing strategies applicable for transient detection. We present examples that illustrate the methodologies of detecting short-duration transients, including the use of sub-arrays for higher resilience to spurious events of terrestrial origin, localization of candidate events via imaging, and the use of a phased array for improved signal detection and confirmation. In addition to demonstrating applications of interferometric arrays for fast transient exploration, our efforts mark important steps in the roadmap toward SKA-era science.

  2. Detection of Fast Transients with Radio Interferometric Arrays

    Science.gov (United States)

    Bhat, N. D. R.; Chengalur, J. N.; Cox, P. J.; Gupta, Y.; Prasad, J.; Roy, J.; Bailes, M.; Burke-Spolaor, S.; Kudale, S. S.; van Straten, W.

    2013-05-01

    Next-generation radio arrays, including the Square Kilometre Array (SKA) and its pathfinders, will open up new avenues for exciting transient science at radio wavelengths. Their innovative designs, comprising a large number of small elements, pose several challenges in digital processing and optimal observing strategies. The Giant Metre-wave Radio Telescope (GMRT) presents an excellent test-bed for developing and validating suitable observing modes and strategies for transient experiments with future arrays. Here we describe the first phase of the ongoing development of a transient detection system for GMRT that is planned to eventually function in a commensal mode with other observing programs. It capitalizes on the GMRT's interferometric and sub-array capabilities, and the versatility of a new software backend. We outline considerations in the plan and design of transient exploration programs with interferometric arrays, and describe a pilot survey that was undertaken to aid in the development of algorithms and associated analysis software. This survey was conducted at 325 and 610 MHz, and covered 360 deg² of the sky with short dwell times. It provides large volumes of real data that can be used to test the efficacies of various algorithms and observing strategies applicable for transient detection. We present examples that illustrate the methodologies of detecting short-duration transients, including the use of sub-arrays for higher resilience to spurious events of terrestrial origin, localization of candidate events via imaging, and the use of a phased array for improved signal detection and confirmation. In addition to demonstrating applications of interferometric arrays for fast transient exploration, our efforts mark important steps in the roadmap toward SKA-era science.

  3. Design of a new electrode array for cochlear implants

    International Nuclear Information System (INIS)

    Kha, H.; Chen, B.

    2010-01-01

    Full text: This study aims to design a new electrode array which can be precisely located beneath the basilar membrane within the cochlear scala tympani. This placement of the electrode array is beneficial for increasing the effectiveness of the electrical stimulation of the auditory nerves and maximising the growth factors delivered into the cochlea for regenerating the progressively lost auditory neurons, thereby significantly improving the performance of cochlear implant systems. Methods: The design process involved two steps. First, the biocompatible nitinol-based shape memory alloy, whose mechanical deformation can be controlled using electrical currents/fields activated by body temperature, was selected. Second, five different designs of the electrode array with embedded nitinol actuators were studied (Table I). The finite element method was employed to predict the final positions of these electrode arrays. Results: The electrode array with three 6 mm actuators at 2-8, 8-14 and 14-20 mm from the tip (Fig. I) was found to be located most closely to the basilar membrane, compared with those in the other four cases. Conclusions: A new nitinol cochlear implant electrode array with three embedded nitinol actuators has been designed. This electrode array is expected to be located beneath the basilar membrane for maximising the delivery of growth factors. Future research will involve the manufacturing of a prototype of this electrode array for use in insertion experiments and neurotrophin release tests.

  4. Blending of phased array data

    Science.gov (United States)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme `beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) into only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with fewer channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.
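
    The core of such a deblending scheme is an iterative loop that alternates between enforcing consistency with the blended record and suppressing incoherent energy by thresholding in a transform domain. The sketch below is a generic, simplified version of that loop (the blending operator, the plain 2-D Fourier thresholding and all parameters are illustrative stand-ins, not the paper's f-k or wavefield extrapolation filters):

        import numpy as np

        # Generic iterative "filter and threshold" deblending loop. Groups of
        # `group` adjacent receivers are blended (summed) into one channel;
        # deblending alternates between a data-consistency update and a
        # sparsity-promoting threshold in the 2-D Fourier domain.
        def blend(data, group):
            nt, nrec = data.shape
            return data.reshape(nt, nrec // group, group).sum(axis=2)

        def spread(blended, group):
            # adjoint of blend: copy each blended channel back to its receivers
            return np.repeat(blended, group, axis=1)

        def deblend(blended, group, n_iter=100):
            estimate = np.zeros((blended.shape[0], blended.shape[1] * group))
            for it in range(n_iter):
                residual = blended - blend(estimate, group)        # data consistency
                update = estimate + spread(residual, group) / group
                spec = np.fft.fft2(update)
                thr = np.abs(spec).max() * (1.0 - (it + 1) / n_iter)
                spec[np.abs(spec) < thr] = 0.0                     # keep strongest components
                estimate = np.real(np.fft.ifft2(spec))
            return estimate

        # Toy usage: 48 receivers blended in groups of 4, a single dipping event.
        nt, nrec, group = 256, 48, 4
        t, x = np.meshgrid(np.arange(nt), np.arange(nrec), indexing="ij")
        clean = np.exp(-0.5 * ((t - 60 - 2 * x) / 3.0) ** 2)       # linear event
        recovered = deblend(blend(clean, group), group)
        print("relative error:", np.linalg.norm(recovered - clean) / np.linalg.norm(clean))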

  5. A review of array radars

    Science.gov (United States)

    Brookner, E.

    1981-10-01

    Achievements in the area of array radars are illustrated by such activities as the operational deployment of the large high-power, high-range-resolution Cobra Dane; the operational deployment of two all-solid-state high-power, large UHF Pave Paws radars; and the development of the SAM multifunction Patriot radar. This paper reviews the following topics: array radars steered in azimuth and elevation by phase shifting (phase-phase steered arrays); arrays steered ±60°, limited scan arrays, hemispherical coverage arrays, and omnidirectional coverage arrays; array radars steered electronically in only one dimension, either by frequency or by phase steering; and array radar antennas which use no electronic scanning but instead use array antennas for achieving low antenna sidelobes.

  6. Micro-machined high-frequency (80 MHz) PZT thick film linear arrays.

    Science.gov (United States)

    Zhou, Qifa; Wu, Dawei; Liu, Changgeng; Zhu, Benpeng; Djuth, Frank; Shung, K

    2010-10-01

    This paper presents the development of a micromachined high-frequency linear array using PZT piezoelectric thick films. The linear array has 32 elements with an element width of 24 μm and an element length of 4 mm. Array elements were fabricated by deep reactive ion etching of PZT thick films, which were prepared by spin coating of a PZT sol-gel composite. Detailed fabrication processes, especially the PZT thick film etching conditions and a novel transferring-and-etching method, are presented and discussed. Array designs were evaluated by simulation. Experimental measurements show that the array had a center frequency of 80 MHz and a fractional bandwidth (-6 dB) of 60%. An insertion loss of -41 dB and an adjacent element crosstalk of -21 dB were found at the center frequency.
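
    A quick back-of-the-envelope check of the quoted geometry, assuming a sound speed of roughly 1500 m/s in water or soft tissue (the speed is an assumption here, not a figure from the paper):

        c = 1500.0                       # assumed sound speed in water/tissue, m/s
        f0 = 80e6                        # quoted centre frequency, Hz
        wavelength_um = c / f0 * 1e6     # acoustic wavelength in micrometres
        print(f"wavelength at 80 MHz: {wavelength_um:.2f} um")            # ~18.8 um
        print(f"24 um element width = {24 / wavelength_um:.2f} wavelengths")
        print(f"4 mm element length = {4000 / wavelength_um:.0f} wavelengths")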

  7. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  8. Preparation and properties of novel magnetic composite nanostructures: Arrays of nanowires in porous membranes

    International Nuclear Information System (INIS)

    Vazquez, M.; Hernandez-Velez, M.; Asenjo, A.; Navas, D.; Pirota, K.; Prida, V.; Sanchez, O.; Baldonedo, J.L.

    2006-01-01

    In the present work, we introduce our latest achievements in the development of novel highly ordered composite magnetic nanostructures employing anodized nanoporous membranes as precursor templates, in which long-range hexagonal symmetry is induced by self-assembly during the anodization process. Subsequent processing steps such as electroplating, sputtering or pressing are employed to prepare arrays of metallic, semiconductor or polymeric nanowires embedded in oxide or metallic membranes. Particular attention is paid to recent results on controlling the magnetic anisotropy in arrays of metallic nanowires, particularly Co, and in nanohole arrays in Ni membranes.

  9. Semiconductor processing with excimer lasers

    International Nuclear Information System (INIS)

    Young, R.T.; Narayan, J.; Christie, W.H.; van der Leeden, G.A.; Rothe, D.E.; Cheng, L.J.

    1983-01-01

    The advantages of pulsed excimer lasers for semiconductor processing are reviewed. Extensive comparisons of the quality of annealing of ion-implanted Si obtained with XeCl and ruby lasers have been made. The results indicate that irrespective of the large differences in the optical properties of Si at uv and visible wavelengths, the efficiency of usage of the incident energy for annealing is comparable for the two lasers. However, because of the excellent optical beam quality, the XeCl laser can provide superior control of the surface melting and the resulting junction depth. Furthermore, the concentrations of electrically active point defects in the XeCl laser annealed region are 2 to 3 orders of magnitude lower than that obtained from ruby or Nd:YAG lasers. All these results seem to suggest that XeCl lasers should be suitable for fabricating not only solar cells but also the more advanced device structures required for VLSI or VHSIC applications

  10. The EUROBALL array

    International Nuclear Information System (INIS)

    Rossi Alvarez, C.

    1998-01-01

    The quality of the multidetector array EUROBALL is described, with emphasis on the history and formal organization of the related European collaboration. The detector layout is presented together with the electronics and data acquisition capabilities. The status of the instrument, its performance and the main features of some recently developed ancillary detectors are also described. The EUROBALL array has been operational at the Legnaro National Laboratory (Italy) since April 1997 and is expected to run until November 1998. The array represents a significant improvement in detector efficiency and sensitivity with respect to the previous generation of multidetector arrays.

  11. A polychromator-type near-infrared spectrometer with a high-sensitivity and high-resolution photodiode array detector for pharmaceutical process monitoring on the millisecond time scale.

    Science.gov (United States)

    Murayama, Kodai; Genkawa, Takuma; Ishikawa, Daitaro; Komiyama, Makoto; Ozaki, Yukihiro

    2013-02-01

    In the fine chemicals industry, particularly in the pharmaceutical industry, advanced sensing technologies have recently begun being incorporated into the process line in order to improve safety and quality in accordance with process analytical technology. For estimating the quality of powders without preparation during drug formulation, near-infrared (NIR) spectroscopy has been considered the most promising sensing approach. In this study, we have developed a compact polychromator-type NIR spectrometer equipped with a photodiode (PD) array detector. This detector consists of 640 InGaAs-PD elements with a 20 μm pitch. Some high-specification spectrometers, which use InGaAs-PD detectors with 512 elements, have a wavelength resolution of about 1.56 nm when covering the 900-1700 nm range. The newly developed detector, with one of the highest PD element densities available, enables a wavelength resolution below 1.25 nm. Moreover, thanks to its combination with a highly integrated charge amplifier array circuit, the measurement speed of the detector is two orders of magnitude higher than that of existing PD array detectors. The developed spectrometer is small (120 mm × 220 mm × 200 mm) and light (6 kg), and it combines the high-density, high-sensitivity PD array detector with NIR and spectroscopy technology to provide the detection mechanism and sensitivity required for powder measurement, as well as a high-speed measuring function for blenders. Moreover, we have evaluated the characteristics of the developed NIR spectrometer, and measurements of powder samples confirmed its high functionality.
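
    The quoted resolution figures follow directly from dividing the covered span by the number of detector elements, assuming the 900-1700 nm range is dispersed roughly linearly across the array (a simplifying assumption):

        span_nm = 1700 - 900                  # spectral span covered by the polychromator
        for n_elements in (512, 640):
            print(n_elements, "elements ->", span_nm / n_elements, "nm per element")
        # 512 elements -> 1.5625 nm per element  (the ~1.56 nm cited for 512-element detectors)
        # 640 elements -> 1.25 nm per element    (the new high-density detector)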

  12. A polychromator-type near-infrared spectrometer with a high-sensitivity and high-resolution photodiode array detector for pharmaceutical process monitoring on the millisecond time scale

    Science.gov (United States)

    Murayama, Kodai; Genkawa, Takuma; Ishikawa, Daitaro; Komiyama, Makoto; Ozaki, Yukihiro

    2013-02-01

    In the fine chemicals industry, particularly in the pharmaceutical industry, advanced sensing technologies have recently begun being incorporated into the process line in order to improve safety and quality in accordance with process analytical technology. For estimating the quality of powders without preparation during drug formulation, near-infrared (NIR) spectroscopy has been considered the most promising sensing approach. In this study, we have developed a compact polychromator-type NIR spectrometer equipped with a photodiode (PD) array detector. This detector consists of 640 InGaAs-PD elements with a 20 μm pitch. Some high-specification spectrometers, which use InGaAs-PD detectors with 512 elements, have a wavelength resolution of about 1.56 nm when covering the 900-1700 nm range. The newly developed detector, with one of the highest PD element densities available, enables a wavelength resolution below 1.25 nm. Moreover, thanks to its combination with a highly integrated charge amplifier array circuit, the measurement speed of the detector is two orders of magnitude higher than that of existing PD array detectors. The developed spectrometer is small (120 mm × 220 mm × 200 mm) and light (6 kg), and it combines the high-density, high-sensitivity PD array detector with NIR and spectroscopy technology to provide the detection mechanism and sensitivity required for powder measurement, as well as a high-speed measuring function for blenders. Moreover, we have evaluated the characteristics of the developed NIR spectrometer, and measurements of powder samples confirmed its high functionality.

  13. Challenging aspects of contemporary cochlear implant electrode array design.

    Science.gov (United States)

    Mistrík, Pavel; Jolly, Claude; Sieber, Daniel; Hochmair, Ingeborg

    2017-12-01

    A design comparison of current perimodiolar and lateral wall electrode arrays of the cochlear implant (CI) is provided. The focus is on functional features such as acoustic frequency coverage and tonotopic mapping, battery consumption and dynamic range. The trauma associated with their insertion is also evaluated. Review of up-to-date literature. Perimodiolar electrode arrays are positioned in the basal turn of the cochlea near the modiolus. They are designed to initiate the action potential in proximity to the neural soma located in the spiral ganglion. On the other hand, lateral wall electrode arrays can be inserted deeper inside the cochlea, as they are located along the lateral wall and such an insertion trajectory is less traumatic. This class of arrays targets primarily surviving neural peripheral processes. Due to their larger insertion depth, lateral wall arrays can deliver lower acoustic frequencies in a manner better corresponding to cochlear tonotopicity. In fact, spiral ganglion sections containing auditory nerve fibres tuned to low acoustic frequencies are located deeper than one and a half turns inside the cochlea. For this reason, a significant frequency mismatch might occur for apical electrodes in perimodiolar arrays, detrimental to speech perception. Tonal languages such as Mandarin might therefore be better treated with lateral wall arrays. On the other hand, closer proximity to the target tissue results in lower psychophysical threshold levels for perimodiolar arrays. However, the maximal comfort level is also lower, paradoxically resulting in a narrower dynamic range than that of lateral wall arrays. Battery consumption is comparable for both types of arrays. Lateral wall arrays are less likely to cause trauma to cochlear structures. As the current trend in cochlear implantation is the maximal protection of residual acoustic hearing, the lateral wall arrays seem more suitable for hearing preservation CI surgeries. Future development could focus on combining the

  14. Testing of ITER central solenoid coil insulation in an array

    International Nuclear Information System (INIS)

    Jayakumar, R.; Martovetsky, N.N.; Perfect, S.A.

    1995-01-01

    A glass-polyimide insulation system has been proposed by the US team for use in the Central Solenoid (CS) coil of the International Thermonuclear Experimental Reactor (ITER) machine, and it is planned to use this system in the CS model coil inner module. The turn insulation will consist of 2 layers of combined prepreg and Kapton. Each layer is 50% overlapped with a butt wrap of prepreg and an overwrap of S glass. The coil layers will be separated by a glass-resin composite and impregnated in a VPI process. Small-scale tests on the various components of the insulation are complete. It is planned to fabricate and test the insulation in a 4 x 4 insulated CS conductor array which will include the layer insulation and be vacuum impregnated. The conductor array will be subjected to 20 thermal cycles and 100,000 mechanical load cycles in a liquid nitrogen environment. These loads are similar to those seen in the CS coil design. The insulation will be electrically tested at several stages during mechanical testing. This paper will describe the array configuration, fabrication process, instrumentation, testing configuration, and supporting analyses used in selecting the array and test configurations.

  15. Ultra-wideband WDM VCSEL arrays by lateral heterogeneous integration

    Science.gov (United States)

    Geske, Jon

    Advancements in heterogeneous integration are a driving factor in the development of evermore sophisticated and functional electronic and photonic devices. Such advancements will merge the optical and electronic capabilities of different material systems onto a common integrated device platform. This thesis presents a new lateral heterogeneous integration technology called nonplanar wafer bonding. The technique is capable of integrating multiple dissimilar semiconductor device structures on the surface of a substrate in a single wafer bond step, leaving different integrated device structures adjacent to each other on the wafer surface. Material characterization and numerical simulations confirm that the material quality is not compromised during the process. Nonplanar wafer bonding is used to fabricate ultra-wideband wavelength division multiplexed (WDM) vertical-cavity surface-emitting laser (VCSEL) arrays. The optically-pumped VCSEL arrays span 140 nm from 1470 to 1610 nm, a record wavelength span for devices operating in this wavelength range. The array uses eight wavelength channels to span the 140 nm with all channels separated by precisely 20 nm. All channels in the array operate single mode to at least 65°C with output power uniformity of ±1 dB. The ultra-wideband WDM VCSEL arrays are a significant first step toward the development of a single-chip source for optical networks based on coarse WDM (CWDM), a low-cost alternative to traditional dense WDM. The CWDM VCSEL arrays make use of fully-oxidized distributed Bragg reflectors (DBRs) to provide the wideband reflectivity required for optical feedback and lasing across 140 nm. In addition, a novel optically-pumped active region design is presented. It is demonstrated, with an analytical model and experimental results, that the new active-region design significantly improves the carrier uniformity in the quantum wells and results in a 50% lasing threshold reduction and a 20°C improvement in the peak
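
    The channel plan implied by the abstract can be reproduced with a one-line calculation (assuming the eight channels are placed uniformly between the quoted end points):

        channels = [1470 + 20 * k for k in range(8)]   # eight channels, 20 nm apart
        print(channels)                                # [1470, 1490, ..., 1610]
        print(channels[-1] - channels[0], "nm span")   # 140 nm (7 gaps x 20 nm)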

  16. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    Science.gov (United States)

    2016-02-01

    ...optimum low sidelobes are demonstrated in several examples. Index Terms — array signal processing, beams, linear algebra, phased arrays, shaped ... beam antennas. I. INTRODUCTION: For many phased array antenna applications, low spatial sidelobes are required, and it is desirable to maintain ... represented by a linear combination of low sidelobe beamformers with no failed elements, ...'s, in a neighborhood around ..., under the constraint that the linear ...
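
    Although the record above is only a fragment, the idea it describes (approximating the nominal low-sidelobe pattern by a linear combination of beamformers that exclude the failed elements) can be illustrated with a generic least-squares fit. The sketch below is a hedged stand-in, not the report's actual algorithm; the array size, taper and failed-element indices are arbitrary:

        import numpy as np

        # Generic least-squares restoration: given a nominal low-sidelobe taper for
        # an N-element uniform linear array and a set of failed elements, re-solve
        # the weights of the working elements so the achieved pattern best matches
        # the nominal pattern over a dense angular grid.
        N = 32
        d = 0.5                                    # element spacing in wavelengths
        n = np.arange(N)
        theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
        A = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n))   # steering matrix

        w_nominal = np.hamming(N)                  # stand-in low-sidelobe taper
        p_nominal = A @ w_nominal                  # nominal pattern samples

        failed = [5, 17]                           # hypothetical failed elements
        working = [i for i in range(N) if i not in failed]

        # Least-squares fit: weights on working elements that reproduce p_nominal
        w_fix, *_ = np.linalg.lstsq(A[:, working], p_nominal, rcond=None)
        w_restored = np.zeros(N, dtype=complex)
        w_restored[working] = w_fix
        p_restored = A @ w_restored

        err_db = 20 * np.log10(np.max(np.abs(p_restored - p_nominal)) / np.max(np.abs(p_nominal)))
        print(f"worst-case pattern deviation: {err_db:.1f} dB relative to peak")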

  17. Remanence coercivity of dot arrays of hcp-CoPt perpendicular films

    Energy Technology Data Exchange (ETDEWEB)

    Mitsuzuka, K; Shimatsu, T; Aoi, H [Research Institute of Electrical Communication, Tohoku University, Sendai, 980-8577 (Japan); Kikuchi, N; Okamoto, S; Kitakami, O, E-mail: shimatsu@riec.tohoku.ac.j [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai, 980-8577 (Japan)

    2010-01-01

    The remanence coercivity, H_r, of hcp-CoPt dot arrays with various dot thicknesses, δ (3 and 10 nm), and Pt contents (20-30 at%) was experimentally investigated as a function of the dot diameter, D (30-400 nm). All dot arrays showed a single domain state, even after removal of an applied field equal to H_r. The angular dependence of H_r for the dot arrays indicated coherent rotation of the magnetization during nucleation. H_r increased as D decreased in all series of dot arrays with various δ and Pt content. Assuming that the nucleation field of a dot is determined by the switching field of the grain having the smallest switching field, we calculated the value of the nucleation field H_n^cal, taking account of the c-axis distribution and the distribution of the demagnetizing field in the dot. The values of H_r obtained experimentally are in good agreement with those of H_n^cal, taking account of thermal agitation of the magnetization. This result suggested that the reversal process of hcp-CoPt dot arrays starts from nucleation at the center of the dot followed by a propagation process.
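
    The "coherent rotation" diagnosis typically rests on comparing the measured angular dependence with the Stoner-Wohlfarth switching-field curve, which has its characteristic minimum at 45° between the applied field and the easy axis. A minimal sketch of that reference curve follows (the anisotropy field H_K below is a placeholder, not a value from the paper):

        import numpy as np

        # Stoner-Wohlfarth switching field versus field angle (coherent rotation).
        def h_sw(theta_rad, H_K=1.0):
            c = np.abs(np.cos(theta_rad)) ** (2.0 / 3.0)
            s = np.abs(np.sin(theta_rad)) ** (2.0 / 3.0)
            return H_K / (c + s) ** 1.5

        angles = np.radians([0, 15, 30, 45, 60, 75, 90])
        for a, h in zip(np.degrees(angles), h_sw(angles)):
            print(f"{a:5.1f} deg : H_sw/H_K = {h:.3f}")
        # Coherent rotation gives the characteristic minimum (H_sw = 0.5*H_K) at
        # 45 deg, rising back toward H_K at 0 and 90 deg.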

  18. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device that includes an image sensor, such as a CCD or CMOS sensor, as the receiver. In order to transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED can emit a multi-spectral optical signal such as visible, infrared or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of sync data and information data. The sync data are used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical switching rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and image sensor.
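
    Decoding one snapshot then reduces to sampling each LED cell of the rectified transmitter region and thresholding its brightness. The sketch below is illustrative only (it assumes the sync step has already located and rectified the LED-array region; the grid size and threshold rule are arbitrary choices, not the authors'):

        import numpy as np

        def decode_snapshot(patch, rows, cols, threshold=None):
            """Recover a rows x cols bit matrix from one rectified image patch."""
            h, w = patch.shape
            cell_h, cell_w = h // rows, w // cols
            levels = np.empty((rows, cols))
            for r in range(rows):
                for c in range(cols):
                    cell = patch[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
                    levels[r, c] = cell.mean()          # mean brightness of this LED cell
            if threshold is None:
                threshold = levels.mean()               # simple adaptive on/off threshold
            return (levels > threshold).astype(int)

        # Toy example: a 4x4 LED array rendered at 16x16 pixels per LED.
        bits_tx = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]])
        patch = np.kron(bits_tx * 200 + 20, np.ones((16, 16)))   # bright vs dim cells
        assert np.array_equal(decode_snapshot(patch, 4, 4), bits_tx)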

  19. Gravitational wave searches with pulsar timing arrays: Cancellation of clock and ephemeris noises

    Science.gov (United States)

    Tinto, Massimo

    2018-04-01

    We propose a data processing technique to cancel monopole and dipole noise sources (such as clock and ephemeris noises, respectively) in pulsar timing array searches for gravitational radiation. These noises are the dominant sources of correlated timing fluctuations in the lower part (≈10⁻⁹-10⁻⁸ Hz) of the gravitational wave band accessible by pulsar timing experiments. After deriving the expressions that reconstruct these noises from the timing data, we estimate the gravitational wave sensitivity of our proposed processing technique to single-source signals to be at least one order of magnitude higher than that achievable by directly processing the timing data from an equal-size array. Since arrays can generate pairs of clock- and ephemeris-free timing combinations that are no longer affected by correlated noises, we implement with them the cross-correlation statistic to search for an isotropic stochastic gravitational wave background. We find the resulting optimal signal-to-noise ratio to be more than one order of magnitude larger than that obtainable by correlating pairs of timing data from arrays of equal size.
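
    A much-simplified way to see the monopole/dipole cancellation (not the paper's exact combinations) is to fit, at each epoch, a common clock-like term plus a dipole term projected onto each pulsar's sky direction, and subtract the fit from the residuals; the pulsar count, noise levels and directions below are arbitrary:

        import numpy as np

        # Fit and remove a monopole term c(t) (clock-like, common to every pulsar)
        # and a dipole term d(t).n_i (ephemeris-like, projecting onto each pulsar's
        # direction n_i) from the array of timing residuals at each epoch.
        def remove_monopole_dipole(residuals, directions):
            """residuals: (n_epochs, n_pulsars); directions: (n_pulsars, 3) unit vectors."""
            n_pulsars = residuals.shape[1]
            M = np.hstack([np.ones((n_pulsars, 1)), directions])   # columns: [1, n_x, n_y, n_z]
            cleaned = np.empty_like(residuals)
            for k, r in enumerate(residuals):
                coeff, *_ = np.linalg.lstsq(M, r, rcond=None)      # [c, d_x, d_y, d_z]
                cleaned[k] = r - M @ coeff
            return cleaned

        # Toy check: inject a pure clock error; it should be removed almost entirely.
        rng = np.random.default_rng(0)
        dirs = rng.normal(size=(20, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        clock = rng.normal(size=(100, 1))                          # same error for all pulsars
        res = np.tile(clock, (1, 20)) + 1e-3 * rng.normal(size=(100, 20))
        print(np.std(res), np.std(remove_monopole_dipole(res, dirs)))   # large vs ~1e-3
        # Note: removing these terms also absorbs any part of a signal that happens
        # to share monopole or dipole structure across the array.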

  20. Servo scanning 3D micro EDM for array micro cavities using on-machine fabricated tool electrodes

    Science.gov (United States)

    Tong, Hao; Li, Yong; Zhang, Long

    2018-02-01

    Array micro cavities are useful in many fields, including micro molds, optical devices, biochips and so on. Array servo scanning micro electro discharge machining (EDM), using array micro electrodes with a simple cross-sectional shape, has the advantage of machining complex 3D micro cavities in batches. In this paper, the machining errors caused by offline-fabricated array micro electrodes are analyzed in particular, and then a machining process of array servo scanning micro EDM is proposed using on-machine fabricated array micro electrodes. The array micro electrodes are fabricated on-machine by combined procedures including wire electro discharge grinding, array reverse copying and electrode end trimming. Nine-array tool electrodes with Φ80 µm diameter and 600 µm length are obtained. Furthermore, the proposed process is verified by several machining experiments for achieving nine-array hexagonal micro cavities with a top side length of 300 µm, a bottom side length of 150 µm, and a depth of 112 µm or 120 µm. In the experiments, a chip hump accumulates on the electrode tips, like the built-up edge in mechanical machining, under the conditions of brass workpieces, copper electrodes and a deionized water dielectric. The accumulated hump can be avoided by replacing the water dielectric with an oil dielectric.